Dataset schema (fields present in each record below):

| Field | Type | Range / classes |
|---|---|---|
| path | stringlengths | 8 – 204 |
| content_id | stringlengths | 40 – 40 |
| detected_licenses | list | – |
| license_type | stringclasses | 2 values |
| repo_name | stringlengths | 8 – 100 |
| repo_url | stringlengths | 27 – 119 |
| star_events_count | int64 | 0 – 6.26k |
| fork_events_count | int64 | 0 – 3.52k |
| gha_license_id | stringclasses | 10 values |
| gha_event_created_at | timestamp[ns] | – |
| gha_updated_at | timestamp[ns] | – |
| gha_language | stringclasses | 12 values |
| language | stringclasses | 1 value |
| is_generated | bool | 1 class |
| is_vendor | bool | 1 class |
| conversion_extension | stringclasses | 6 values |
| size | int64 | 172 – 10.2M |
| script | stringlengths | 367 – 7.46M |
| script_size | int64 | 367 – 7.46M |
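Each record below carries these fields for one notebook converted to a Python script. For orientation only, here is a minimal sketch of how such records might be streamed with the Hugging Face `datasets` library; the identifier `your-org/notebook-scripts` is a hypothetical placeholder, since the dataset's actual name is not given in this extract.

```python
from datasets import load_dataset

# Hypothetical placeholder identifier; substitute the real dataset name.
ds = load_dataset("your-org/notebook-scripts", split="train", streaming=True)

# Inspect a few permissively licensed, non-generated scripts without
# downloading the whole dataset.
shown = 0
for record in ds:
    if record["license_type"] == "permissive" and not record["is_generated"]:
        print(record["repo_name"], record["path"], record["script_size"])
        shown += 1
    if shown == 5:
        break
```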
path: /Clustering Categorical Data - KMeans.ipynb
content_id: 31aaa5127c2950781d6a105084ed490cf8432789
detected_licenses: []
license_type: no_license
repo_name: 2Animesh/Machine-Learning
repo_url: https://github.com/2Animesh/Machine-Learning
star_events_count: 0
fork_events_count: 0
gha_license_id: null
gha_event_created_at: 2020-05-12T21:41:03
gha_updated_at: 2020-05-08T08:39:44
gha_language: Jupyter Notebook
language: Jupyter Notebook
is_generated: false
is_vendor: false
conversion_extension: .py
size: 89,628
script:
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Clustering Categorical Data # You are given much more country data. Using the same methodology as the one in the lecture, group all the countries in 2 clusters. # # <b> Already done that? Okay! </b> # # There are other features: name and continent. # # Encode the continent one and use it in the clustering solution. Think about the difference with the previous exercise. # ## Import the relevant libraries import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns sns.set() # ## Load the data raw_data = pd.read_csv('Categorical.csv') raw_data.head() raw_data['continent'].unique() # ## Map the data # Use the <i>'continent'</i> category for this analysis. data_mapped = raw_data.copy() data_mapped['continent'] = data_mapped['continent'].map({'North America': 0, 'Asia': 1, 'Africa': 2, 'Europe':3, 'South America':4, 'Oceania': 5, 'Antarctica':6, 'Seven seas (open ocean)': 7}) data_mapped.head() # ## Select the features x = data_mapped.iloc[:, 3:] x.head() # ### WCSS (within cluster sum of squares) from sklearn.cluster import KMeans kmeans = KMeans(8) kmeans.fit(x) kmeans.inertia_ wcss = [] max_cluster_num = 9 # maximum no.of clusters we want + 1, here 9 as 8 continents for i in range(1, max_cluster_num): kmeans = KMeans(i) kmeans.fit(x) wcss.append(kmeans.inertia_) wcss # ### The Elbow Method plt.plot(range(1,max_cluster_num), wcss) plt.title('The Elbow Method') plt.xlabel('Number of clusters') plt.ylabel('Within-Cluster Sum of Squares') plt.show() # ## Clustering from sklearn.cluster import KMeans kmeans = KMeans(7) # ## Clustering results data_mapped['Clusters'] = kmeans.fit_predict(x) # ## Plot the data fig, ax = plt.subplots(1,2, sharey=True, figsize=(15,3)) ax[0].scatter(data_mapped['Longitude'], data_mapped['Latitude']) ax[0].set_title('Countries without clustering', size=15) ax[0].set_xlabel('Longitude') ax[0].set_ylabel('Latitude') ax[1].scatter(data_mapped['Longitude'], data_mapped['Latitude'], c=data_mapped['Clusters'], cmap='rainbow') ax[1].set_title('Countries with clustering', size=15) ax[1].set_xlabel('Longitude') plt.xlim(-180,180) plt.ylim(-90,90) plt.show()
script_size: 2,573
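The script above maps the `continent` column to integers, sweeps k-means over 1–8 clusters, and reads the elbow off the WCSS curve. Below is a minimal standalone sketch of that loop; it uses synthetic 2-D data rather than the notebook's `Categorical.csv`, and the explicit `n_clusters`, `n_init`, and `random_state` arguments are additions for reproducibility, not taken from the original.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic stand-in for the notebook's mapped feature matrix.
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(50, 2)) for c in (0, 4, 8)])

wcss = []
max_cluster_num = 9  # exclusive upper bound on the number of clusters to try
for k in range(1, max_cluster_num):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    wcss.append(km.inertia_)  # within-cluster sum of squares

# The "elbow" is the k after which inertia stops dropping sharply.
for k, w in zip(range(1, max_cluster_num), wcss):
    print(f"k={k}: WCSS={w:.1f}")
```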
path: /Olympic stuff.ipynb
content_id: 3f9add8107686b336f337ca72780f96abea74c34
detected_licenses: []
license_type: no_license
repo_name: yashas1145/JSON-Based-Inventory-Management
repo_url: https://github.com/yashas1145/JSON-Based-Inventory-Management
star_events_count: 0
fork_events_count: 0
gha_license_id: null
gha_event_created_at: null
gha_updated_at: null
gha_language: null
language: Jupyter Notebook
is_generated: false
is_vendor: false
conversion_extension: .py
size: 9,894
script:
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # ChEn-3170 Fall 2021 UMass Lowell; Prof. V. F. de Almeida **01Sep2021** # # # ChEn-3170 Computational Methods in Chemical Engineering – Course Syllabus # $ # \newcommand{\Amtrx}{\boldsymbol{\mathsf{A}}} # \newcommand{\Bmtrx}{\boldsymbol{\mathsf{B}}} # \newcommand{\Mmtrx}{\boldsymbol{\mathsf{M}}} # \newcommand{\Imtrx}{\boldsymbol{\mathsf{I}}} # \newcommand{\Pmtrx}{\boldsymbol{\mathsf{P}}} # \newcommand{\Qmtrx}{\boldsymbol{\mathsf{Q}}} # \newcommand{\Lmtrx}{\boldsymbol{\mathsf{L}}} # \newcommand{\Umtrx}{\boldsymbol{\mathsf{U}}} # \newcommand{\Smtrx}{\boldsymbol{\mathsf{S}}} # \newcommand{\xvec}{\boldsymbol{\mathsf{x}}} # \newcommand{\avec}{\boldsymbol{\mathsf{a}}} # \newcommand{\bvec}{\boldsymbol{\mathsf{b}}} # \newcommand{\cvec}{\boldsymbol{\mathsf{c}}} # \newcommand{\rvec}{\boldsymbol{\mathsf{r}}} # \newcommand{\mvec}{\boldsymbol{\mathsf{m}}} # \newcommand{\gvec}{\boldsymbol{\mathsf{g}}} # \newcommand{\zerovec}{\boldsymbol{\mathsf{0}}} # \newcommand{\norm}[1]{\bigl\lVert{#1}\bigr\rVert} # \DeclareMathOperator{\rank}{rank} # $ # ### Instructor: Prof. Valmor F. de Almeida # **Office:** Southwick Hall 202B North Campus <br> # **Email:** [email protected]. <br> # **Web:** https://www.uml.edu/Engineering/Chemical/faculty/de-Almeida-Valmor.aspx <br> # **Lectures:** # + **North Campus Kitson Hall 310**, Mon, Wed 1:00 – 1:50 pm. <br> # + **North Campus Southwick Hall 317**, Thu 11:00 am – 12:50 am (section 801A), 2:00 pm – 3:50 pm (section 802A) # # **Days meetings total Mon:** 12. <br> # **Days meetings total Wed:** 14. <br> # **Days meetings total Thu:** 2 x 13. <br> # **Week meetings total:** 15. <br> # <br> # <span style="color:red"><b>COVID-19 note:</b></span> In the event of COVID-19 related issues during the semester, this course may use virtual lectures [repository](https://github.com/dpploy/chen-3170) using the UMass Lowell Blackboard System. <br> # <br> # <span style="color:red"><b>Website:</b></span> On-line course [repository](https://github.com/dpploy/chen-3170) and UMass Lowell Blackboard System. <br> # **Office hours and location:** meet at 2:00 – 3:00 pm, Mon/Wed and 4:00 - 5:00 pm, Thu in **Southwick Hall 202B**. <br> # **Additional office hours:** email prof. de Almeida. <br> # <span style="color:red"><b>Computational support:</b></span> email prof. de Almeida for group sessions. <br> # **Teaching assistant:** TBA (email: [email protected]). <br> # **Catalog description (proposed):** <br> # The goal of this course is to present to students of chemical engineering an interconnected set of computational methods needed in the core undergraduate chemical engineering curriculum. In particular, methods that assist the students in solving problems in the core areas of chemical reaction equilibria, separations, unit operations, and chemical reactor engineering with applications in nuclear and biochemical engineering.<br> # **Prerequisites:** Math-2340 Differential Equations or Math-2360 Engineering Differential Equations, ChEn-3030 Fluid Mechanics.<br> # **Co-requisites:** none.<br> # **Course designation:** required. <br> # <span style="color:red"><b>Suggested:</b></span> [ChEn-1070](https://github.com/dpploy/chen-1070) Introduction to Chemical Engineering, ChEn-2020 Chemical Eng. 
Thermodynamics, *ChEn-3100 Separation Processes*, ChEn-3110 Phase and Chemical Equilibria, *ChEn-4030 Chemical Reaction Engineering*. <br> # **References:**<br> # There is no required textbook. Course notes are provided. Here are some suggested references for those who ask: # 1. *ChEn-3170 Course Notes* OneNote links in the course [repository](https://github.com/dpploy/chen-3170) notebooks. # 2. *Python 3 Language Programming*, [ref 1](https://www.python.org/), [ref 2](https://wiki.python.org/moin/BeginnersGuide), [ref 3](https://www.python.org/doc/), and [ref 4](https://docs.python.org/3.6/contents.html). # 3. [*Jupyter Notebook*](http://jupyter.org/). # # <span style="color:red"><b>Software needed for course and labwork:</b></span> Python system and Jupyter Notebook to be obtained freely<br> # # 1. <span style="color:red"><b>Preferred way:</b></span> [Anaconda](https://docs.anaconda.com/anaconda/install/#) free download (use the Python 3 version) for Mac OS X or Windows (Linux too in case you are a rare die-hard programmer; if this is your case please come talk to me). After install, use [Anaconda-Navigator](https://docs.anaconda.com/anaconda/navigator/) to start a Jupyter Notebook server (more information is available in this course [repository](https://github.com/dpploy/chen-3170)). # # 1. Use Binder at the course [repository](https://github.com/dpploy/chen-3170) (see instructions at the site). # # 1. Use the [UMass Lowell vLabs](https://www.uml.edu/IT/Services/vLabs/) Learning Commons machine. Download the VMWare Horizon Client and install on your computer. Login with your academic credentials. Anaconda is pre-installed, use the Anaconda-Navigator to start the Jupyter Notebook application. # # 1. Refer to [01-Introduction Notebook](https://nbviewer.jupyter.org/github/dpploy/chen-3170/blob/master/notebooks/01-introduction.ipynb) in the course repository. # # **Suplement materials:** On-line course [repository for this course](https://github.com/dpploy/chen-3170) and UMass Lowell Blackboard System. # **Course topics:** # - Jupyter Notebook and Python programming language # - Computational stoichiometry (linear algebra and least-squares method) # - Data approximation I (general linear least-squares method: Fourier and Wavelet basis functions) # - Chemical equilibria (non-linear algebraic systems) # - Data approximation II (non-linear least-squares method) # - Chemical/Nuclear kinetics (time-dependent chemical/nuclear 0-D reactors, *e.g.* CSTR, semi-batch, batch, reactors) # # [Overall](https://studentuml-my.sharepoint.com/:o:/g/personal/valmor_dealmeida_uml_edu/EkrBnydkYmNNjRTj3KQyx9YBlYE1uN8ryzNJfZZQNI81Bw?e=fe220R), [this course](https://github.com/dpploy/chen-3170) combines computational mathematics, computer programming (in Python programming language), and core elements of chemical engineering as listed above. # # **Grading notes:** for grading purposes the requirements for this course include: # + Hands-on laboratory work assignments (Jupyter Notebook in Python submitted online via Blackboard at designated due date on Blackboard <span style="color:red"><b>every week</b></span>). # + Late submission (1 day) will be graded at 80%. Later submission at smaller percentage. # + As a bonus for those interested in improving their grade, an on-line oral exam can be scheduled by contacting me. The oral exam is to be taken in the week of finals on the day scheduled by SIS. 
# # The electronic laboratory notebook will be collected and graded for technical content, style and grammar, and for overall professional appearance. The graded laboratory work represents a significant part of the evaluation process for this course since it is expected that a large part of the learning of the course material will be associated with the student’s effort on the laboratory work assignments. # <span style="color:red"><b>All assignments are individual</b></span>. Make up of late laboratory work assignments will be resolved on a case-by-case basis if there is enough evidence of a special circumstance. # # Announcements and submission of laboratory work will be made via **Blackboard** distributed to the students. # # |**Course Grading Procedure** |**Value**| # |:----------------------------|:-------:| # | Laboratory work | 100/100 | # | Chance for grade improvement (by appointment): bonus oral exam | up to additional 10/100 | # # # |**Letter Grade Scale** |**Value**| # |:----------------------|:-------:| # | A | 92+ | # | A- | 87–91.9 | # | B+ | 82–86.9 | # | B | 77–81.9 | # | B- | 72–76.9 | # | C+ | 67–71.9 | # | C | 62–66.9 | # | C- | 58–61.9 | # | D+ | 54–57.9 | # | D | 50–53.9 | # | F | <50 | # **Course outcomes:** Upon completion of this course, a student should be able to # 1. Write and execute computer programs for various algorithms covered in the course. # 1. Understand stoichiometry from a computational perspective and be able to compute chemical reaction rates from species production rates using linear algebra algorithms including the linear least-squares method. # 1. Use the general linear least-squares method to approximate reaction rate constant parameters. # 1. Solve nonlinear algebraic equations derived from chemical equilibria of multiple reactions. # 1. Use the nonlinear least-squares method to approximate data to nonlinear models. # 1. Solve coupled system of ordinary differential equations for non-isothermal chemical/nuclear reactor dynamics. # 1. Program algorithms in the Python language and demonstrate problem solving abilities using Jupyter electronic notebooks. # 1. Produce professional, all-inclusive eletronic notebook reports combining textual content, program, results and analysis. # # **Relation of course outcomes to ABET Criterion 3:** # # |**Course Outcomes** |**ABET Criterion 3**| # |:----------------------|:-------:| # | 1 | a, e | # | 2 | b | # | 3 | k | # | 4 | k | # | 5 | a, k | # | 6,7 | a, b | # | 8 | a, b, k, e | # # # **Schedule (updated weekly):** # # Labwork listed below will be available at the day of the labwork. <span style="color:red"><b>Previous years labwork</b></span> can be obtained in [the course repository](https://github.com/dpploy/chen-3170) under [releases](https://github.com/dpploy/chen-3170/releases). Also, syllabi from past offerings of this course are included in the releases. 
# # |**Week**|Day| **Date** |**Notebook**|**Assessment**|**Note**| # |:-------|:-:|:---------:|:-----------|:-------------|:------:| # |- |- | <span style="color:blue"><b>September</b>|- |- |- | # | **1** | **W** |**01Sep21**| [01](https://nbviewer.jupyter.org/github/dpploy/chen-3170/blob/master/notebooks/01-introduction.ipynb) | - | Syllabus/Intro Jupyter Notebook & Python | # | 1 | Th | 02Sep21 | [01](https://nbviewer.jupyter.org/github/dpploy/chen-3170/blob/master/notebooks/01-introduction.ipynb)/[02](https://nbviewer.jupyter.org/github/dpploy/chen-3170/blob/master/notebooks/02-variables-types-structures.ipynb) | [Labwork-01](https://nbviewer.jupyter.org/github/dpploy/chen-3170/blob/master/notebooks/labwork-01.ipynb) due 16Sep21 BB submission | Primitive Data Types | # | **~2~** | **~M~** | **~06Sep21~** | No class | No class | <span style="color:blue"><b>Labor Day Holiday</b>| # | 2 | W | 08Sep21 | [02](https://nbviewer.jupyter.org/github/dpploy/chen-3170/blob/master/notebooks/02-variables-types-structures.ipynb) | - | Primitive Data Types | # | 2 | Th | 09Sep21 | [02](https://nbviewer.jupyter.org/github/dpploy/chen-3170/blob/master/notebooks/02-variables-types-structures.ipynb) | - | Built-in Data Structures | # | **3** | **M** | **13Sep21** | [03](https://nbviewer.jupyter.org/github/dpploy/chen-3170/blob/master/notebooks/03-arrays.ipynb) | - | Arrays (Vectors) | # | 3 | W | 15Sep21 | [03](https://nbviewer.jupyter.org/github/dpploy/chen-3170/blob/master/notebooks/03-arrays.ipynb) | - | Arrays (Vectors) | # | 3 | Th | 16Sep21 | [03](https://nbviewer.jupyter.org/github/dpploy/chen-3170/blob/master/notebooks/03-arrays.ipynb) | [Labwork-02](https://nbviewer.jupyter.org/github/dpploy/chen-3170/blob/master/notebooks/labwork-02.ipynb) due 30Sep21 BB submission | Arrays (Matrices/Bricks) | # | **4** | **M** | **20Sep21** | [03](https://nbviewer.jupyter.org/github/dpploy/chen-3170/blob/master/notebooks/03-arrays.ipynb) | - | Arrays (Bricks) | # | 4 | W | 22Sep21 | [03](https://nbviewer.jupyter.org/github/dpploy/chen-3170/blob/master/notebooks/03-arrays.ipynb)/[04](https://nbviewer.jupyter.org/github/dpploy/chen-3170/blob/master/notebooks/04-array-operations.ipynb) | - | Arrays (Bricks)/Array Operations | # | 4 | T | 23Sep21 | [04](https://nbviewer.jupyter.org/github/dpploy/chen-3170/blob/master/notebooks/04-array-operations.ipynb)/[06](https://nbviewer.jupyter.org/github/dpploy/chen-3170/blob/master/notebooks/06-linear-algebra-fundamentals.ipynb) | - | Computational Stoichiometry (Linear Algebra Basics)/Notes | # | **5** | **M** | **27Sep21** | [06](https://nbviewer.jupyter.org/github/dpploy/chen-3170/blob/master/notebooks/06-linear-algebra-fundamentals.ipynb) | - | Computational Stoichiometry (Linear Algebra Basics)/Notes | # | 5 | W | 29Sep21 | [06](https://nbviewer.jupyter.org/github/dpploy/chen-3170/blob/master/notebooks/06-linear-algebra-fundamentals.ipynb) | - | Computational Stoichiometry (Linear Algebra Basics)/Notes | # | 5 | Th | 30Sep21 | [06](https://nbviewer.jupyter.org/github/dpploy/chen-3170/blob/master/notebooks/06-linear-algebra-fundamentals.ipynb) | [Labwork-03](https://nbviewer.jupyter.org/github/dpploy/chen-3170/blob/master/notebooks/labwork-03.ipynb) due 14Oct21 BB submission | Computational Stoichiometry (Linear Algebra Basics)/Notes | # |**Week**|Day| **Date** |**Notebook**|**Assessment**|**Note**| # |:-------|:-:|:---------:|:-----------|:-------------|:------:| # |- |- | <span style="color:blue"><b>October</b>|- |- |- | # |**Week**|Day| **Date** 
|**Notebook**|**Assessment**|**Note**| # |:-------|:-:|:---------:|:-----------|:-------------|:------:| # |- |- | <span style="color:blue"><b>November</b>|- |- |- | # | **10** | **Mon** |**01Nov21**| TBA | TBA | TBA | # | 10 | Wed |03Nov21| TBA | TBA | TBA | # | 10 | Thu |04Nov21| TBA | [Labwork-05](https://nbviewer.jupyter.org/github/dpploy/chen-3170/blob/master/notebooks/labwork-05.ipynb) due 12Nov21 BB submission | TBA | # | **~11~** | **~Mon~** |**~08Nov21~**| No class | No class | AIChE meeting | # | ~11~ | ~Wed~ |~10Nov21~| No class | No class | AIChE meeting | # | ~11~ | ~Th~ |~11Nov21~| No class | No class | Veteran's Day Holiday | # |**Week**|Day| **Date** |**Notebook**|**Assessment**|**Note**| # |:-------|:-:|:---------:|:-----------|:-------------|:------:| # |- |- | <span style="color:blue"><b>December</b>|- |- |- | # ### General Information # # **Attendance:** *All* Students are expected to attend *all* classes. # # **Mobile device policy:** Mobile devices are **needed** for programming and following the material presented online (in this Virtual version of the course); see classroom/online conduct below. # # **Credit hour policy:** A credit hour requires a minimum of 2 hours of out-of-class student **deep work** per 1 hour of instructor-led course activity. # # **Classroom/Online Conduct:** <span style="color:red">Students are expected to exhibit professional and respectful behavior that is conducive to a mutually beneficial learning environment in the classroom/online. Examples of inappropriate behavior include: text messaging, listening to music, cell phone use (other than the campus alert system), late arrivals, early departures, use of laptops for other than class purposes, disrespectful comments or behavior, intentional disruptions, failure to follow faculty directives, etc. Students in violation of these standards may be asked to leave class (or virtual class) and/or be referred to the Dean of Students for disciplinary action.</span> # # **Academic Integrity:** <span style="color:red">Cheating and plagiarism will not be tolerated. A first offense will result in a failing grade for the assignment/exam in question and a formal filing with the Office of Provost according to the Academic Integrity Policy. A second offense could lead to a failing grade in the course, suspension or expulsion, as detailed in the policy, defined [here](https://www.uml.edu/Catalog/Undergraduate/Policies/Academic-Policies/Academic-Integrity.aspx).</span> # # **Instructional Resources:** The Centers for Learning and Academic Support Services provide many tutoring resources; more details are available [here](https://www.uml.edu/class/) # Technology Resources: For a listing of available computing and software resources available to students, [visit here]( https://www.uml.edu/IT/Services/DLC/). # # **Accommodations:** In accordance with University policy and the ADA, accommodations are provided for students with documented disabilities. If you have a disability, please contact the Office of Disability Services as soon as possible. Their office is in UC 220 (978-934-4574, [email protected]). Documentation of disability is confidential. Requests for accommodation for religious reasons should be directed to Equal Opportunity and Outreach at 978-934-3565, Wannalancit Mills, Suite 301. # # **Counseling Services:** As part of the Wellness Center, Counseling Services at UMass Lowell provide mental health counseling, consultation and referrals to help students achieve personal and academic success. 
They also assist students in better understanding and coping with their feelings, relationships, and choices surrounding their academic success. [Visit](https://www.uml.edu/student-services/Counseling/) # Veterans’ Services: UMass Lowell is committed to helping our military students take full advantage of all the educational benefits available through the federal and state governments. For complete information on the services and resources available please visit our [website](https://www.uml.edu/student-services/Veterans/). # University Cancellation Information: If campus is closed (most likely for weather), visit the website for announcements relevant to the class.
script_size: 18,467
path: /Project/Lookahead/Lookahead/CIFAR_main jupyter.ipynb
content_id: e8c5b398fdcf866cb9fdb80535f45e9cfd777d0a
detected_licenses: []
license_type: no_license
repo_name: johnnyvjoyce/Deep-Learning-Soton-Course
repo_url: https://github.com/johnnyvjoyce/Deep-Learning-Soton-Course
star_events_count: 0
fork_events_count: 0
gha_license_id: null
gha_event_created_at: null
gha_updated_at: null
gha_language: null
language: Jupyter Notebook
is_generated: false
is_vendor: false
conversion_extension: .py
size: 1,303,313
script:
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import torchvision import torchvision.transforms as transforms from lookahead_pytorch import Lookahead from CIFAR.model import ResNet18 import torch from torch import nn import numpy as np import torchbearer NB_EPOCHS = 200 NB_TRIALS = 3 DATASET = 'CIFAR100' # or 'CIFAR10' # Data setup if DATASET == 'CIFAR10': trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True) elif DATASET == 'CIFAR100': trainset = torchvision.datasets.CIFAR100(root='./data', train=True, download=True) else: raise ValueError("Dataset not setup") channel_means = [np.mean(trainset.data[:,:,:,i]) for i in range(3)] channel_stds = [np.std(trainset.data[:,:,:,i]) for i in range(3)] # Transforms train_transform = transforms.Compose( [transforms.RandomCrop(32, padding=4), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize(mean=[x / 255.0 for x in channel_means], std=[x / 255.0 for x in channel_stds])]) test_transform = transforms.Compose( [transforms.ToTensor(), transforms.Normalize(mean=[x / 255.0 for x in channel_means], std=[x / 255.0 for x in channel_stds])]) if DATASET == 'CIFAR10': trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=train_transform) testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=test_transform) elif DATASET == 'CIFAR100': trainset = torchvision.datasets.CIFAR100(root='./data', train=True, download=True, transform=train_transform) testset = torchvision.datasets.CIFAR100(root='./data', train=False, download=True, transform=test_transform) # Data loaders trainloader = torch.utils.data.DataLoader(trainset, batch_size=128, shuffle=True) valloader = torch.utils.data.DataLoader(testset, batch_size=128, shuffle=False) device = "cuda:0" results = [] def train(optimizer_name): scheduler = torchbearer.callbacks.torch_scheduler.MultiStepLR(milestones=[60, 120, 160], gamma=0.2) if DATASET == 'CIFAR100': model = ResNet18(100) else: model = ResNet18() checkpoint = torchbearer.callbacks.ModelCheckpoint(DATASET + "\\" + str(trial_num) + "\\" + optimizer_name + '_checkpoint.pt') logger = torchbearer.callbacks.CSVLogger(DATASET + "\\" + str(trial_num) + "\\" + optimizer_name + '_log.pt', separator=',', append=True) if optimizer_name == 'SGD': optimizer = torch.optim.SGD(model.parameters(), lr=0.05, momentum=0.9, weight_decay=0.001) elif optimizer_name == 'Lookahead': optimizer = Lookahead(torch.optim.SGD(model.parameters(), lr=0.1), la_alpha=0.8, la_steps=5) elif optimizer_name == 'AdamW': optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4, weight_decay=1) elif optimizer_name == 'Polyak': optimizer = torch.optim.ASGD(model.parameters(), lr=0.3, weight_decay=0.001) elif optimizer_name == 'Adam': optimizer = torch.optim.Adam(model.parameters()) elif optimizer_name == 'Lookahead(Adam)': optimizer = Lookahead(torch.optim.Adam(model.parameters())) else: raise ValueError("Optimizer not setup") loss_function = nn.CrossEntropyLoss() trial = torchbearer.Trial(model, optimizer, loss_function, metrics=['loss', 'accuracy', 'top_5_acc'], callbacks=[scheduler, checkpoint, logger]).to(device) trial.with_generators(trainloader, val_generator=valloader) results.append(trial.run(epochs=NB_EPOCHS)) # Run optimizer_names = ['SGD', 'Lookahead', 'AdamW', 'Polyak', 'Adam', 
'Lookahead(Adam)'] # optimizer_names = ['Adam', 'Lookahead(Adam)'] for trial_num in range(NB_TRIALS): torch.manual_seed(trial_num+1) for i, opt in enumerate(optimizer_names): train(opt) # Test Plot import matplotlib.pyplot as plt import pandas as pd plt.figure() for opt_name, result in zip(optimizer_names, results): plt.plot(pd.DataFrame(result)['val_loss'], label=opt_name) # pd.DataFrame(result).to_csv("results_"+opt_name) plt.grid(True) plt.legend() plt.savefig(DATASET + "\\" + str(trial_num) + '\\loss_plot.png') # - import matplotlib.pyplot as plt import pandas as pd plt.figure() for opt_name, result in zip(optimizer_names, results): plt.plot(pd.DataFrame(result)['val_loss'], label=opt_name) pd.DataFrame(result).to_csv(DATASET + "\\" + str(trial_num) + "\\results_"+opt_name+".csv") plt.xlabel("Epoch") plt.ylabel("Loss") plt.grid(True) plt.legend() plt.savefig(DATASET + "\\" + str(trial_num) + '\\loss_plot2.png') for trial_num in [1,2]: torch.manual_seed(trial_num+1) for i, opt in enumerate(optimizer_names): train(opt) # Test Plot import matplotlib.pyplot as plt import pandas as pd plt.figure() for opt_name, result in zip(optimizer_names, results): plt.plot(pd.DataFrame(result)['val_loss'], label=opt_name) pd.DataFrame(result).to_csv(DATASET + "\\" + str(trial_num) + "\\results_"+opt_name+".csv") plt.grid(True) plt.legend() plt.savefig(DATASET + "\\" + str(trial_num) + '\\loss_plot.png') e-dataset-to-a-shapefile:) for more info. #Convert the FeatureSet to a Spatial DataFrame nc_Substations_sdf = nc_Substations.sdf type(nc_Substations_sdf) # Now that we have a spatial dataframe, we can save it using the SpatialDataFrame's [to_featureclass()](https://developers.arcgis.com/python/api-reference/arcgis.features.toc.html#arcgis.features.SpatialDataFrame.to_featureclass) method. # + #Create a new folder to hold the feature class import os if not os.path.exists('HIFLD'): os.mkdir("HIFLD") #Export the data to a feature class called "Substations.shp" in the HIFLD folder nc_Substations_sdf.spatial.to_featureclass(location='./HIFLD/Substations.shp') # - #Zip the folder import shutil shutil.make_archive(base_name='HIFLD',format='zip',root_dir='./HIFLD')
script_size: 6,197
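The CIFAR script above derives per-channel normalization statistics from the raw `uint8` training images and rescales them to the 0–1 range inside `transforms.Normalize`. A minimal sketch of just that preprocessing step is shown below; it assumes torchvision is installed, restricts itself to CIFAR-10 (the script also supports CIFAR-100), and downloads the data to `./data`.

```python
import numpy as np
import torchvision
import torchvision.transforms as transforms

# trainset.data is a (50000, 32, 32, 3) uint8 array for CIFAR-10.
trainset = torchvision.datasets.CIFAR10(root="./data", train=True, download=True)

# Per-channel mean/std in 0-255 units, as computed in the script above.
channel_means = [np.mean(trainset.data[:, :, :, i]) for i in range(3)]
channel_stds = [np.std(trainset.data[:, :, :, i]) for i in range(3)]

# ToTensor() rescales pixels to [0, 1], so divide the statistics by 255 to match.
train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[m / 255.0 for m in channel_means],
                         std=[s / 255.0 for s in channel_stds]),
])

# Re-create the dataset with the transform attached for training.
trainset = torchvision.datasets.CIFAR10(root="./data", train=True,
                                        download=True, transform=train_transform)
```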
path: /Capstone_Project/.ipynb_checkpoints/Capstone_Project_Proposal-checkpoint.ipynb
content_id: cfd5293aa96933a79de32344606e13b145bc74ff
detected_licenses: []
license_type: no_license
repo_name: just4jc/SPRINGBOARD
repo_url: https://github.com/just4jc/SPRINGBOARD
star_events_count: 0
fork_events_count: 0
gha_license_id: null
gha_event_created_at: null
gha_updated_at: null
gha_language: null
language: Jupyter Notebook
is_generated: false
is_vendor: false
conversion_extension: .py
size: 5,477
script:
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import pandas as pd from pathlib import Path import random from tqdm import tqdm from category_encoders.cat_boost import CatBoostEncoder from sklearn.ensemble import RandomForestRegressor from sklearn.linear_model import LinearRegression from sklearn.metrics import mean_squared_error from sklearn.model_selection import StratifiedKFold # - SEED = 14300631 N_FOLDS = 5 random.seed(SEED) np.random.seed(SEED) raw_data_dir = Path('../data/raw') raw_train = pd.read_csv(raw_data_dir / 'train.csv', sep=';', parse_dates=['creation_date', 'modification_date', 'publish_date']) raw_test = pd.read_csv(raw_data_dir / 'test.csv', sep=';', parse_dates=['creation_date', 'modification_date', 'publish_date']) raw_education = pd.read_csv(raw_data_dir / 'education.csv', sep=';') raw_employements = pd.read_csv(raw_data_dir / 'employements.csv', sep=';') raw_worldskills = pd.read_csv(raw_data_dir / 'worldskills.csv', sep=';') # ### Preprocess # + def filter_experience(x): if np.isnan(x) or x > 50: return np.nan return x def filter_age(x): if np.isnan(x) or x < 14 or x > 83: return np.nan return x # - def preprocess_train_test(df): df['publish_year'] = df['publish_date'].dt.year all_drive_licences = ['A', 'B', 'C', 'D', 'E'] for licence_type in all_drive_licences: df[f'drive_licences_{licence_type}'] = df['drive_licences'].fillna('').apply(lambda x: int(licence_type in x)) all_schedules = [ ('vahta', 'Вахтовый метод'), ('gibkiy', 'Гибкий график'), ('nenorm', 'Ненормированный рабочий день'), ('nepoln', 'Неполный рабочий день'), ('poln', 'Полный рабочий день'), ('smena', 'Сменный график'), ] for schedule_label, schedule_type in all_schedules: df[f'schedule_{schedule_label}'] = df['schedule'].apply(lambda x: int(schedule_type in x)) df['experience'] = df['experience'].apply(filter_experience) df['age'] = df['age'].apply(filter_age) df = df.drop([ 'locality', 'position', 'locality_name','drive_licences', 'schedule', 'is_worldskills_participant', 'has_qualifications', 'creation_date', 'modification_date','publish_date', ], axis=1) if 'salary' in df.columns: df['salary'] = np.log(df['salary'] + 1) df['salary_desired'] = np.log(df['salary_desired'] + 1) df['region'] = df['region'].astype('category') df['education_type'] = df['education_type'].astype('category') df['industry'] = df['industry'].astype('category') df['citizenship'] = df['citizenship'].astype('category') df['employement_type'] = df['employement_type'].astype('category') df['gender'] = df['gender'].astype('category') df['relocation_ready'] = df['relocation_ready'].astype('boolean') df['travel_ready'] = df['travel_ready'].astype('boolean') df['retraining_ready'] = df['retraining_ready'].astype('boolean') return df def preprocess_education(df): df['graduation_year'] = df['graduation_year'].astype('category') df['institution'] = df['institution'].str.lower().str.replace('\"', '').astype('category') df = df.drop('description', axis=1) return df def preprocess_employements(df): df['employer'] = df['employer'].str.lower().str.replace('\"', '').astype('category') df['position'] = df['position'].str.lower().str.replace('\"', '').astype('category') df['start_date'] = pd.to_datetime(df['start_date'], errors='coerce') df['finish_date'] = pd.to_datetime(df['finish_date'], errors='coerce') df['work_duration'] = df['finish_date'] - 
df['start_date'] df['work_duration'] = df['work_duration'].dt.days df = df.drop(['achievements', 'responsibilities', 'start_date', 'finish_date'], axis=1) return df def preprocess_worldskills(df): df['status'] = df['status'].astype('category') df['int_name'] = df['int_name'].astype('category') df['ru_name'] = df['ru_name'].astype('category') df['code'] = df['code'].astype('category') df['is_international'] = df['is_international'].astype('boolean') return df train = preprocess_train_test(raw_train) test = preprocess_train_test(raw_test) education = preprocess_education(raw_education) employements = preprocess_employements(raw_employements) worldskills = preprocess_worldskills(raw_worldskills) full_train = pd.merge(train, education, how='left', on='id') full_train = pd.merge(full_train, employements, how='left', on='id') full_train = pd.merge(full_train, worldskills, how='left', on='id') full_test = pd.merge(test, education, how='left', on='id') full_test = pd.merge(full_test, employements, how='left', on='id') full_test = pd.merge(full_test, worldskills, how='left', on='id') new_drop_columns = ['status', 'code', 'is_international', 'int_name', 'ru_name'] full_train = full_train.drop(new_drop_columns, axis=1) full_test = full_test.drop(new_drop_columns, axis=1) X_test = full_test.drop(['id'], axis=1) X_test.head(3) # + cat_columns = full_test.select_dtypes(include=['category', 'boolean']).columns cat_encoder = CatBoostEncoder( cols=cat_columns, ) skf = StratifiedKFold(n_splits=N_FOLDS, shuffle=True, random_state=SEED) # - # ### Training - RandomForest (LB - 1.00888) # + cv_metrics = [] test_preds = [] idx = 0 for train_indexes, val_indexes in tqdm(skf.split(full_train, full_train['publish_year'])): print(idx) idx += 1 X_train = full_train.loc[train_indexes] y_train = X_train['salary'] X_train = X_train.drop(['id', 'salary'], axis=1) X_train = cat_encoder.fit_transform(X_train, y_train) X_val = full_train.loc[val_indexes] y_val = X_val['salary'] X_val = X_val.drop(['id', 'salary'], axis=1) X_val = cat_encoder.transform(X_val) X_test_temp = cat_encoder.transform(X_test) for col in X_train.columns: mean_value = X_train[col].mean() X_train[col] = X_train[col].fillna(mean_value) X_val[col] = X_val[col].fillna(mean_value) X_test_temp[col] = X_test_temp[col].fillna(mean_value) model = RandomForestRegressor(random_state=SEED, n_jobs=-1) model.fit( X_train, y_train, ) val_pred = model.predict(X_val) val_score = mean_squared_error(y_val, val_pred) cv_metrics.append(val_score) print(val_score) test_pred = model.predict(X_test_temp) test_preds.append(np.exp(test_pred) - 1) # - # Остановка раньше времени, так как кончалось время до обновления количества посылок # Local validation np.mean(cv_metrics) len(test_preds) test_pred = np.array(test_preds).mean(axis=0) test_pred new_test = full_test[['id']] new_test['salary'] = test_pred new_test.head() new_test.to_csv('rf_5_fold.csv', index=False) # LB - 1.00888 # ### Training - LinearRegression (LB: 1.01838) # + cv_metrics = [] test_preds = [] idx = 0 for train_indexes, val_indexes in tqdm(skf.split(full_train, full_train['publish_year'])): print(idx) idx += 1 X_train = full_train.loc[train_indexes] y_train = X_train['salary'] X_train = X_train.drop(['id', 'salary'], axis=1) X_train = cat_encoder.fit_transform(X_train, y_train) X_val = full_train.loc[val_indexes] y_val = X_val['salary'] X_val = X_val.drop(['id', 'salary'], axis=1) X_val = cat_encoder.transform(X_val) X_test_temp = cat_encoder.transform(X_test) for col in X_train.columns: mean_value = 
X_train[col].mean() X_train[col] = X_train[col].fillna(mean_value) X_val[col] = X_val[col].fillna(mean_value) X_test_temp[col] = X_test_temp[col].fillna(mean_value) model = LinearRegression(n_jobs=-1) model.fit( X_train, y_train, ) val_pred = model.predict(X_val) val_score = mean_squared_error(y_val, val_pred) cv_metrics.append(val_score) print(val_score) test_pred = model.predict(X_test_temp) test_preds.append(np.exp(test_pred) - 1) # - #Local validation np.mean(cv_metrics) len(test_preds) test_pred = np.array(test_preds).mean(axis=0) test_pred new_test = full_test[['id']] new_test['salary'] = test_pred new_test.head() new_test.to_csv('lin_reg_5_fold.csv', index=False) # + active="" # LB: 1.01838 alues df_all[df_all.isnull().any(axis=1)] # - df_all.mean() # + # You could run this code below, which would fill the missing Weight values with the whole # column means. Is this a good idea? Why, why not? #df_all["Weight"].fillna(df_all["Weight"].mean()) # - df_all.groupby("Gender").mean() df_all.Weight.fillna(166.444444, inplace=True) df_all.Height.fillna(413.052632, inplace=True) df_all.head() # + # Now that we have filled the missing values, you will see that there are no rows with # missing values df_all[df_all.isnull().any(axis=1)] # Can anyone guess what the axis argument is doing? # + # Now consider if you had a HUGE dataset with multiple columns and multiple groups (i.e # more than just male and female). It would take time to fillna df_all["Weight"].fillna(df_all.groupby("Gender")["Weight"].transform("mean"), inplace=True) # - df_all[df_all.isnull().any(axis=1)] # + # And now lets fill missing Height values df_all["Height"].fillna(df_all.groupby("Gender")["Height"].transform("mean"), inplace=True) # + # And voila... no more missing values! df_all[df_all.isnull().any(axis=1)] # - # #### Aggregating data # # One of my favourite pandas functions is the groupby() function. # + # Experiment with .median, .describe, consider adding other categorical variables in the # groupby and then aggregating. # Introduce the .to_clipboard() function, and the transpose function, which are useful # when writing a paper. df_all.groupby("Gender").mean()#.T # + # What if you want to investigate two categorical variables? cols_to_grpby = ['Gender', 'Diagnosed'] df_all.groupby(cols_to_grpby).mean() # - df_all.groupby(cols_to_grpby).median() df_all.groupby(cols_to_grpby).describe().T # + # Prepare your tables for publication? # I copy to clipboard and then format in Excel.. df_all.groupby(cols_to_grpby).describe().T.to_clipboard() # - # ## Data Visualisation # Start with basic bar/line plots, learning seaborn methods, and then something interactive # + #we need to import more libraries, notably matplotlib and seaborn import matplotlib.pyplot as plt from IPython.display import display import seaborn as sns #note the magic function below. This is important because it will allow the plots you #create to appear here in the notebook. # %matplotlib inline # - xx = df_all["FSIQ"] # you can replace FSIQ with any other column.. try it plt.hist(xx, bins = 10); # Now lets consider the seaborn library, which we will use as seaborn is generally easier, and make the images much better! sns.distplot(df_all['Weight'], kde=False, rug=True, bins = 10) # We want to investigate: # # # (1) Differences in IQ/brain size between men and women? # # (2) Any correlations? # ** Now is a good time to explore the seaborn library. 
# I will show you how to use the examples and apply them to your work ** # + #https://seaborn.pydata.org/generated/seaborn.catplot.html?highlight=catplot#seaborn.catplot sns.catplot(data=df_all, x="Gender", y="PIQ", #palette={"Male": "blue", "Female": "pink"}, # you can also specify colours! kind="strip"); #experiment with different kinds, e.g. “bar”, “strip”, “swarm”, “box”, “violin”, or “boxen”. # + # Lets add another categorical variable to the graph: # We want to add Diagnosed or not to the figure sns.catplot(data=df_all, x="Gender", y="PIQ", hue = "Diagnosed", #palette={"Male": "blue", "Female": "pink"}, # you can also specify colours! kind="bar"); #experiment with different kinds, e.g. “bar”, “strip”, “swarm”, “box”, “violin”, or “boxen”. # + # Lets do some correlations and make a correlation matrix corr = df_all.corr() # - corr # + # Now lets add some colour to help differentiate between variables corr.style.background_gradient()#.set_precision(2) # - # If you want to calculate pearson r then you can use the function below, don't worry about the code, I will show you to implement it # + from scipy.stats import pearsonr import pandas as pd def calculate_pvalues(df): df = df.dropna()._get_numeric_data() dfcols = pd.DataFrame(columns=df.columns) pvalues = dfcols.transpose().join(dfcols, how='outer') for r in df.columns: for c in df.columns: pvalues[r][c] = round(pearsonr(df[r], df[c])[1], 4) return pvalues # - calculate_pvalues(df_all) # + #Exploring scatterplots #https://seaborn.pydata.org/generated/seaborn.scatterplot.html # - sns.scatterplot(x="FSIQ", y="PIQ", data=df_all) # + #plt.figure(figsize=(10,8)) sns.scatterplot(data=df_all, x="FSIQ", y="PIQ", hue="Gender",) #experiment with different categorical 'hues', e.g. Diagnosed #size = "Height") # Explore what the size argument does... # + # Saving your figures for publication # assign the code to a random variable (something like g = ) g = sns.scatterplot(data=df_all, x="FSIQ", y="PIQ", hue="Gender",) #experiment with different categorical 'hues', e.g. Diagnosed #size = "Height") # Explore what the size argument does... # Use the savefig argument and provide a filename, such as figure1.png # You can use keyword arguments based on your journal's submission requirements # For exmaple, for the British Journal of Psychiatry (https://www.cambridge.org/core/services/authors/journals/journals-artwork-guide) # they want images at 300dpi and ideally in TIFF format, so you can save your figure appropriately # So check with your submission guidelines and provide the appropriate arguments # Tip: for posters, you might want to use the transparancy argument #g.figure.savefig("output.tiff", dpi = 300) #transparency = True) # - # # Statistics in Python # ##### IV = Gender (Male, Female), also Diagnoses (Yes, No) # ##### DV = IQ Scores (FSIQ, VIQ, PIQ) # We want to test if there is a difference in IQ measures between genders? Or "diagnosed"? What tests can we use? # + # There are two libraries (that i know of anyway) to do stats in Python - scipy and statsmodels # Lets start with scipy and we will compare the syntax and output with statsmodel # Import as below... 
import scipy.stats as stats # - # ### Assumption of normality # # #### Shapiro-wilk test (output = w test statistic, p value) # + stats.shapiro(df_all["FSIQ"]) # consider the difference between the above and the below commented code: #stats.shapiro(df_all["FSIQ"][df_all['Gender'] == 'Male']) # + # You can also test other DVs print(stats.shapiro(df_all["PIQ"])) print(stats.shapiro(df_all["VIQ"])) # + # Now lets make some Q-Q Plots stats.probplot(df_all["FSIQ"],plot= plt) # Give your figure a title plt.title("FSIQ Q-Q Plot"); # + # Lets make a loop and test all three of our IQ DVs in one go: # First make a list variable called cols with your DVs in it cols = ['FSIQ', 'PIQ', 'VIQ'] # Then make a loop using the for statement. This will loop through every item in cols # and do you tell it to do, in this case, we are telling it to do stats.shapiro for i in cols: print(i) print(stats.shapiro(df_all[i][df_all['Gender'] == 'Male'])) for i in cols: stats.probplot(df_all[i][df_all['Gender'] == 'Male'], plot= plt) plt.title("Mental Health Q-Q Plot") print("\n\nAssumption of normality is violated as (all) the p-values are < than 0.05.") # - # ##### Levene's Test levene_1 = df_all["PIQ"][df_all['Gender'] == 'Male'] levene_2 = df_all["PIQ"][df_all['Gender'] == 'Female'] stats.levene(levene_1, levene_2) # + # Lets make a quick loop so we can run a levene's test on all our DVs for i in cols: print(i , ':' , stats.levene(df_all[i][df_all['Gender'] == 'Male'], df_all[i][df_all['Gender'] == 'Female'])) # - # ##### ANOVA # + # We will statsmodels because the output is better (more readable) import statsmodels.api as sm from statsmodels.formula.api import ols # Documentation here: https://www.statsmodels.org/stable/index.html # - results = ols("FSIQ ~ C(Gender)", data = df_all).fit() results.summary() print("The adfsaf asdf , F(%f, %f) = %f , p = %f" %(results.df_resid, results.df_model, results.fvalue, results.f_pvalue)) # + # Consider writing a function that writes out your results section for you! # You need to find the following bits of info from the above... # F(df effect, df error) = F-value, MSE = mean-square error, p-value". # e.g., "IQ scores did/didn't differ significantly between genders, F(X,XX) = XXXX, MSE = XXX, p = XXX. # You can find each of those individual bits of data by typing results. then hit tab and # you will be able to see all the methods that the results object contains! # For instance: #print(results.df_model) #print(results.df_resid) #print(results.fvalue) #print(results.f_pvalue) #print(results.mse_total) # Now you jsut need to put all that together... # - def res_output(dv, iv, df): # So first make two variables that represent the IV and DV x = ("~ C(%s)" %iv) y = str(dv + x) # Then make the model. results = ols(y, data=df).fit() # Then make a statement which prints out an appropriate statement # based on the p-value... if results.f_pvalue > 0.05: print("A one-way ANOVA was conducted to compare difference in %s between %s. We found no significant difference between %s,\ F(%f, %f) = %f , p = %f" %(dv, iv, iv, results.df_model, results.df_resid, results.fvalue, results.f_pvalue)) else: print("A one-way ANOVA was conducted to compare difference in %s between %s. We found a significant difference between %s\ F(%f, %f) = %f , p = %f" %(dv, iv, iv, results.df_model, results.df_resid, results.fvalue, results.f_pvalue)) # If you fancy, you can tell the function to return an output such as the # summary table, if so, uncomment the bottom bit below... 
#return results.summary() # + # Now you can call your function and give it your arguments like below... res_output(dv = "VIQ", iv = "Gender", df = df_all) # - # # Machine Learning - Would you survive the titanic?! # To learn the concept of machine learning (ML), lets predict if YOU would survive the titanic based on the passenger and survival history. # # Python has a powerful library called sci-kit learn (sklearn) which does everything for you! # + #Load up the titanic3.xls into a pandas dataframe titanic_df = pd.read_excel('titanic3.xls', 'titanic3', na_values=['NA']) # - # The columns refer to: # # - survival - Survival (0 = No; 1 = Yes) # - pclass - Passenger Class (1 = 1st; 2 = 2nd; 3 = 3rd) # - name - Name # - sex - Sex # - age - Age # - sibsp - Number of Siblings/Spouses Aboard # - parch - Number of Parents/Children Aboard # - ticket - Ticket Number # - fare - Passenger Fare # - cabin - Cabin # - embarked - Port of Embarkation (C = Cherbourg; Q = Queenstown; S = Southampton) # - boat - Lifeboat (if survived) # - body - Body number (if did not survive and body was recovered) # - home.dest - home destination titanic_df.head() # Firstly lets explore the dataset, # # What was the survival rate? # Can you make a table of gender vs. class and survival. # Can you plot what you are interested in. titanic_df['survived'].mean()*100 titanic_df.groupby(['sex', 'pclass']).mean()['survived']*100 sns.factorplot(data = titanic_df, x = 'sibsp', y = 'parch', aspect = 2) # What features (aka columns) do you think will be important for us to predict survival? # + # We can drop some columns.. titanic_df = titanic_df.drop(['body','cabin','boat', 'home.dest' , 'embarked'], axis=1) # + # lets clean the data in prep for ML... # ML models works only with numbers, can anyone see an issue with the dataset? titanic_df['sex'] = titanic_df['sex'].map({'female': 1, 'male': 0}) # - titanic_df.head(2) # + # ML also doesn't work with missing values, so we can impute them... # for this demonstration, I'm just going to get rid of empty data titanic_df.dropna(inplace=True) # + #lets see if we have any missing rows... titanic_df[titanic_df.isnull().any(axis=1)] # - # Now lets create our features and response variables (X and y) # + # store the feature matrix (X) and response vector (y) X = titanic_df[['pclass', 'sex', 'age', 'sibsp', 'parch', 'fare']] y = titanic_df['survived'] # + # check the shapes of X and y to make sure the number of rows match up # why is this important? print(X.shape) print(y.shape) # - # lets look at our features.. can anyone see what's wrong here? X.head() # lets look at y y # ### 4 steps to ML in Python... IIFP # # #### (1) Import # #### (2) Instantiate # #### (3) Fit # #### (4) Predict # + # (1) import the class, in this case we will use k-neighbours from sklearn.neighbors import KNeighborsClassifier # + # (2) instantiate the model (with the default parameters) knn = KNeighborsClassifier(n_neighbors=25)#weights = 'uniform') # + # (3) fit the model with data (occurs in-place) knn.fit(X, y) #knn.score(X, y) # - X.columns # + # (4) Predict # Enter your data in the same order of the X columns: # There should be 6 numbers: knn.predict([[2, 0, 32, 0, 1, 32]]) # - # We can see that the model has predicted that the new observation’s class is 0 (i.e. I wouldn't have survived!). 
# We can even look at the probabilities the model assigned to each class to see how confident it is: knn.predict_proba([[2, 0, 32, 0, 1, 32]]) # According to this result, the model predicted that the observation was dead with a ~52% probability and survive with a ~48% probability. # # Because the observation had a greater probability of being dead, it predicted that class for the observation. # Lets try another model (Random Forest) but you can experiment with many more... # - Random Forest # - Perceptron # - SGDC # - Decision tree # # First (1) import from sklearn.ensemble import RandomForestClassifier #from sklearn.linear_model import Perceptron #from sklearn.linear_model import SGDClassifier #from sklearn.tree import DecisionTreeClassifier # Then (2) instantiate random_forest = RandomForestClassifier(n_estimators=100) # + # Then (3) fit random_forest.fit(X, y); # + # Then (4) predict random_forest.predict([[2, 0, 32, 0, 1, 35.23]]) # - random_forest.predict_proba([[2, 0, 32, 0, 1, 35.23]]) # Lets see which of our features were the most important to this model: # + # Make a dataframe importances = pd.DataFrame({'feature':X.columns, 'importance': random_forest.feature_importances_}) # Sort the value, set the index importances = importances.sort_values('importance',ascending=False).set_index('feature') # Display the first 15 rows of the dataframe importances.head(15) # - 0.296 + 0.286 + 0.258 + 0.083 + 0.041 +0.036 importances.importance.sum() importances.plot.bar() random_forest.score(X, y)
script_size: 24,119
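The salary-prediction portion of the script above trains on `log(salary + 1)` and inverts predictions with `exp(pred) - 1` before writing the submission. The sketch below isolates that target transform; it uses synthetic data, a single `train_test_split` instead of the notebook's stratified folds, and no categorical encoder, so it is an illustration of the idea rather than a reproduction of the pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(14300631)
# Synthetic stand-in: a skewed, strictly positive target.
X = rng.normal(size=(500, 5))
y = np.exp(X[:, 0] + 0.5 * rng.normal(size=500)) * 30_000

X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Fit in log space, mirroring the notebook's np.log(salary + 1).
model = RandomForestRegressor(random_state=0, n_jobs=-1)
model.fit(X_train, np.log1p(y_train))

# Score in log space ...
val_mse_log = mean_squared_error(np.log1p(y_val), model.predict(X_val))
# ... and invert with expm1 (equivalent to np.exp(pred) - 1) for submission-scale values.
val_pred = np.expm1(model.predict(X_val))
print(val_mse_log, val_pred[:3])
```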
path: /06_DL_data.ipynb
content_id: 072bb14cd51d656533a7ebeca7d2a62ce6219133
detected_licenses: []
license_type: no_license
repo_name: zephyrland/MSG_project
repo_url: https://github.com/zephyrland/MSG_project
star_events_count: 1
fork_events_count: 0
gha_license_id: null
gha_event_created_at: 2022-12-16T04:35:57
gha_updated_at: 2022-03-07T06:26:12
gha_language: null
language: Jupyter Notebook
is_generated: false
is_vendor: false
conversion_extension: .py
size: 7,160
script:
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Data pre-processing for u-net model # In this notebook we prepare the data that is going to be ingested by the u-net model. A total of 3 features are going to be used: HRV_norm, IR_108 and channel differences (WV_062-IR_108). The IR_108 contains information about the cloud top height, the channel differences contain information about the water content of the cloud and the HRV provides information about the structure of the cloud. Theoretically, one of the main advantages of using the u-net model is that the model can learn about the spatial structure of these variables import glob import os import pyart import numpy as np import pandas as pd from copy import deepcopy # suppress anoying iypthon warnings. Not ideal since we suppress also potentially relevant warnings import warnings warnings.filterwarnings('ignore') # ## Auxiliary functions # Function to read original dataset # data is stored as (nz, ny, nx), we return (nx, ny) def read_nc(fname): sat_grid = pyart.io.read_grid(fname) for field_name in sat_grid.fields.keys(): data = np.transpose(np.squeeze(sat_grid.fields[field_name]['data'])) return data # Function for minmax scaling def minmax_scaling(data, vmin, vmax): data2 = deepcopy(data) data2[data2>vmax] = vmax data2[data2<vmin] = vmin return (data2-vmin)/(vmax-vmin) # ## Some global variables # + fbasepath = '/data/pyrad_products/MSG_ML/' features = ['HRV_norm', 'IR_108', 'WV_062-IR_108'] nfeatures = len(features) target = 'POH90' vmins = [0., 200., -78.] vmaxs = [100., 311., 9.] # - # We use minmax normalization to put all variables within the 0-1 range. The min, max values for each variable have been obtained from the EDA. The features matrix has shape nx, ny, n channels (HRV_norm, IR_108 and WV_062-IR_108). The target matrix has shape nx, ny, n classes. The classes are no hail (0) and hail (1). We transform the POH90 to 0 (no hail or not computed) and 1 probabilty of hail above 90%). The shape of those matrices is the one required by the u-net as implemented in the unet package. years = ['2018', '2019', '2020'] months = ['04', '05', '06', '07', '08', '09'] for year in years: for month in months: # Get list of files and data size flist = glob.glob(fbasepath+'*/NETCDF/'+features[0]+'/'+year+month+'*.nc') if len(flist) == 0: continue flist.sort() img_size = read_nc(flist[0]).shape data_size = img_size[0]*img_size[1] for fname in flist: # Get time step bfile = os.path.basename(fname) dt_str = bfile[0:14] print(dt_str, end="\r", flush=True) # Read all files corresponding to a time step # Put them in features and target matrices X = np.empty((img_size[0], img_size[1], nfeatures), dtype=np.float32) for i, (vmin, vmax, feature) in enumerate(zip(vmins, vmaxs, features)): flist_aux = glob.glob(fbasepath+'*/NETCDF/'+feature+'/'+dt_str+'*.nc') data = read_nc(flist_aux[0]) data = minmax_scaling(data, vmin, vmax) X[:, :, i] = data flist_aux = glob.glob(fbasepath+'*/NETCDF/'+target+'/'+dt_str+'*.nc') y = read_nc(flist_aux[0]) # Only hail/no hail y[y == 1] = 0 y[y == 2] = 1 # onehot encoding y_onehot = np.eye(2)[y] # Save data into a .npz file np.savez('/data/ml_course/05_Capstone_project/dl_data/'+dt_str+'_data.npz', features=X, targets=y_onehot)
script_size: 3,950
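The u-net preprocessing script above clips each satellite channel to fixed physical bounds, rescales it to 0–1, and one-hot encodes the hail/no-hail target with `np.eye(2)[y]`. Below is a self-contained numpy sketch of those two steps on a dummy field; the 4x4 array and its values are illustrative, and `np.clip` replaces the script's copy-and-mask clipping with equivalent behavior.

```python
import numpy as np

def minmax_scaling(data, vmin, vmax):
    # Clip to the channel's physical bounds, then rescale to [0, 1].
    clipped = np.clip(data, vmin, vmax)
    return (clipped - vmin) / (vmax - vmin)

# Dummy 4x4 "IR_108" brightness-temperature field (Kelvin), bounds 200-311 K.
ir_108 = np.array([[180., 220., 250., 330.],
                   [205., 240., 260., 300.],
                   [210., 230., 270., 311.],
                   [200., 215., 255., 290.]])
ir_scaled = minmax_scaling(ir_108, vmin=200., vmax=311.)

# Dummy target grid: 0 = no hail / not computed, 1 = POH >= 90 %.
y = np.array([[0, 0, 1, 1],
              [0, 1, 1, 0],
              [0, 0, 0, 0],
              [1, 0, 0, 0]])
y_onehot = np.eye(2)[y]          # shape (4, 4, 2): one channel per class

print(ir_scaled.min(), ir_scaled.max(), y_onehot.shape)
```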
path: /selenium/selenium-practice.ipynb
content_id: d5ec11c0b76dd43c113afc1e8785ad1497774146
detected_licenses: ["MIT"]
license_type: permissive
repo_name: SaudQadir/data-practice
repo_url: https://github.com/SaudQadir/data-practice
star_events_count: 0
fork_events_count: 0
gha_license_id: null
gha_event_created_at: null
gha_updated_at: null
gha_language: null
language: Jupyter Notebook
is_generated: false
is_vendor: false
conversion_extension: .py
size: 18,324
script:
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- driver.get("https://consumer.huawei.com/en/community/new-year/?userId=125588297&fbclid=IwAR2AD-mJOvTsA-aXQ3mbIELa7HhgH0GKQJgcyDC4pvMxwu_XHKBnIwQ_DzQ") driver.execute_script("window.open('about:blank', 'tab2');") driver.switch_to.window("tab2") driver.get("https://temp-mail.org/") #driver.find_element_by_id("click-to-change").click() time.sleep(2) driver.find_element_by_id("click-to-copy").click() email = clipboard.paste() passcode = email[:-1] driver.execute_script("window.open('about:blank', 'tab3');") driver.get("https://id5.cloud.huawei.com/CAS/portal/userRegister/regbyemail.html?reqClientType=90&loginChannel=90000300&countryCode=Global&lang=en-us&loginUrl=https%3A%2F%2Fid5.cloud.huawei.com%2FCAS%2Fportal%2FloginAuth.html&service=https%3A%2F%2Foauth-login5.cloud.huawei.com%2Foauth2%2Fv2%2Fauthorize%3Fresponse_type%3Dcode%26client_id%3D100596125%26redirect_uri%3Dhttps%253A%252F%252Fconsumer.huawei.com%252Fen%252Fcommunity%252Flogin%252F%26scope%3Dhttps%253A%252F%252Fwww.huawei.com%252Fauth%252Faccount%252Fbase.profile%26display%3Dpage&fbclid=IwAR2nTfctPbjx98y58VP8jVooDopPv6XOZAlkQpL0t4ZH75odib6bdbSkuKA""https://id5.cloud.huawei.com/CAS/portal/userRegister/regbyemail.html?reqClientType=90&loginChannel=90000300&countryCode=Global&lang=en-us&loginUrl=https%3A%2F%2Fid5.cloud.huawei.com%2FCAS%2Fportal%2FloginAuth.html&service=https%3A%2F%2Foauth-login5.cloud.huawei.com%2Foauth2%2Fv2%2Fauthorize%3Fresponse_type%3Dcode%26client_id%3D100596125%26redirect_uri%3Dhttps%253A%252F%252Fconsumer.huawei.com%252Fen%252Fcommunity%252Flogin%252F%26scope%3Dhttps%253A%252F%252Fwww.huawei.com%252Fauth%252Faccount%252Fbase.profile%26display%3Dpage&fbclid=IwAR2nTfctPbjx98y58VP8jVooDopPv6XOZAlkQpL0t4ZH75odib6bdbSkuKA") username = driver.find_element_by_id("username") password = driver.find_element_by_id("password") confirmpwd = driver.find_element_by_id("confirmPwd") emailcode = driver.find_element_by_id("authCode") username.send_keys(email) password.send_keys(passcode) confirmpwd.send_keys(passcode) driver.find_element_by_id("getValiCode").click() driver.switch_to.window("tab2") driver.find_element_by_link_text("Verify Your Identity With Email Verification Code").click() code = driver.find_element_by_xpath("/html/body/main/div[1]/div/div[3]/div[2]/div/div[1]/div/div[2]/div[3]/table[1]/tbody/tr[3]/td/span/span/span/b").text driver.switch_to.window("tab3") emailcode.send_keys(code) driver.find_element_by_id("hwid-btnSubmit").click() driver.find_element_by_xpath("/html/body/div[1]/div[2]/div/div[1]/div[2]/div/div[6]/div[1]/div[3]/div[2]/div").click() driver.find_element_by_xpath("/html/body/div[1]/div[2]/div/div[1]/div[2]/div/div[6]/div[1]/div[3]/div[1]/div").click() driver.find_element_by_xpath("/html/body/div[8]/div[2]/div[3]/div[2]/a[2]").click() driver.find_element_by_xpath("/html/body/div[8]/div[2]/div[4]/a").click() driver.execute_script("window.open('about:blank', 'tab4');") driver.switch_to.window("tab4") driver.get("https://consumer.huawei.com/en/community/new-year/?userId=125588297&fbclid=IwAR2AD-mJOvTsA-aXQ3mbIELa7HhgH0GKQJgcyDC4pvMxwu_XHKBnIwQ_DzQ") # + from selenium import webdriver from selenium.webdriver import Edge from selenium.webdriver import DesiredCapabilities from selenium.webdriver.common.by import By import clipboard import time capabilities = DesiredCapabilities.EDGE 
capabilities['ms:inPrivate'] = True driver = Edge(capabilities=capabilities) driver.execute_script("window.open('about:blank', 'tab2');") driver.execute_script("window.open('about:blank', 'tab3');") driver.execute_script("window.open('about:blank', 'tab4');") driver.switch_to.window("tab2") driver.get("https://temp-mail.org/") # + driver.switch_to.window("tab2") driver.find_element_by_id("click-to-delete").click() time.sleep(10) driver.find_element_by_id("click-to-copy").click() email = clipboard.paste() passcode = email[:-1] driver.switch_to.window("tab3") driver.get("https://id5.cloud.huawei.com/CAS/portal/userRegister/regbyemail.html?reqClientType=90&loginChannel=90000300&countryCode=Global&lang=en-us&loginUrl=https%3A%2F%2Fid5.cloud.huawei.com%2FCAS%2Fportal%2FloginAuth.html&service=https%3A%2F%2Foauth-login5.cloud.huawei.com%2Foauth2%2Fv2%2Fauthorize%3Fresponse_type%3Dcode%26client_id%3D100596125%26redirect_uri%3Dhttps%253A%252F%252Fconsumer.huawei.com%252Fen%252Fcommunity%252Flogin%252F%26scope%3Dhttps%253A%252F%252Fwww.huawei.com%252Fauth%252Faccount%252Fbase.profile%26display%3Dpage&fbclid=IwAR2nTfctPbjx98y58VP8jVooDopPv6XOZAlkQpL0t4ZH75odib6bdbSkuKA""https://id5.cloud.huawei.com/CAS/portal/userRegister/regbyemail.html?reqClientType=90&loginChannel=90000300&countryCode=Global&lang=en-us&loginUrl=https%3A%2F%2Fid5.cloud.huawei.com%2FCAS%2Fportal%2FloginAuth.html&service=https%3A%2F%2Foauth-login5.cloud.huawei.com%2Foauth2%2Fv2%2Fauthorize%3Fresponse_type%3Dcode%26client_id%3D100596125%26redirect_uri%3Dhttps%253A%252F%252Fconsumer.huawei.com%252Fen%252Fcommunity%252Flogin%252F%26scope%3Dhttps%253A%252F%252Fwww.huawei.com%252Fauth%252Faccount%252Fbase.profile%26display%3Dpage&fbclid=IwAR2nTfctPbjx98y58VP8jVooDopPv6XOZAlkQpL0t4ZH75odib6bdbSkuKA") username = driver.find_element_by_id("username") password = driver.find_element_by_id("password") confirmpwd = driver.find_element_by_id("confirmPwd") emailcode = driver.find_element_by_id("authCode") username.send_keys(email) password.send_keys(passcode) confirmpwd.send_keys(passcode) driver.find_element_by_id("getValiCode").click() time.sleep(17) driver.switch_to.window("tab2") time.sleep(3) # driver.find_element_by_id("click-to-refresh").click() # time.sleep(2) # driver.find_element_by_link_text("Verify Your Identity With Email Verification Code").click() # time.sleep(1) # code = driver.find_element_by_link_text("/html/body/main/div[1]/div/div[3]/div[2]/div/div[1]/div/div[2]/div[3]/table[1]/tbody/tr[3]/td/span/span/span/b").text # driver.switch_to.window("tab3") # emailcode.send_keys(code) # time.sleep(2) # driver.find_element_by_id("hwid-btnSubmit").click() # time.sleep(2) # for i in range(0,1000): # while True: # try: # driver.find_element_by_xpath("/html/body/div[1]/div[2]/div/div[1]/div[2]/div/div[6]/div[1]/div[3]/div[2]/div").click() # except SomeSpecificException: # continue # break # time.sleep(2) # for i in range(0,1000): # while True: # try: # driver.find_element_by_xpath("/html/body/div[1]/div[2]/div/div[1]/div[2]/div/div[6]/div[1]/div[3]/div[2]/div").click() # except SomeSpecificException: # continue # break # time.sleep(2) # for i in range(0,1000): # while True: # try: # driver.find_element_by_xpath("/html/body/div[8]/div[2]/div[3]/div[2]/a[2]").click() # except SomeSpecificException: # continue # break # time.sleep(3) # for i in range(0,1000): # while True: # try: # driver.find_element_by_xpath("/html/body/div[8]/div[2]/div[4]/a").click() # except SomeSpecificException: # continue # break # driver.switch_to.window("tab4") # 
driver.get("https://consumer.huawei.com/en/community/new-year/?userId=125606597&fbclid=IwAR1HG6-GL_SgYTLme-t8pEffUvDlqdS7HPxeecExSsHW4WrLreopNwT4BlY") # + for i in range(0,100): while True: try: driver.find_element_by_xpath("/html/body/div[1]/div[2]/div/div[1]/div[2]/div/div[6]/div[1]/div[3]/div[2]/div").click() except SomeSpecificException: continue break for i in range(0,100): while True: try: driver.find_element_by_xpath("/html/body/div[1]/div[2]/div/div[1]/div[2]/div/div[6]/div[1]/div[3]/div[2]/div").click() except SomeSpecificException: continue break for i in range(0,100): while True: try: driver.find_element_by_xpath("/html/body/div[8]/div[2]/div[3]/div[2]/a[2]").click() except SomeSpecificException: continue break for i in range(0,100): while True: try: driver.find_element_by_xpath("/html/body/div[8]/div[2]/div[4]/a").click() except SomeSpecificException: continue break driver.switch_to.window("tab4") driver.get("https://consumer.huawei.com/en/community/new-year/?userId=125588297&fbclid=IwAR2AD-mJOvTsA-aXQ3mbIELa7HhgH0GKQJgcyDC4pvMxwu_XHKBnIwQ_DzQ")
8,676
/4 - 30 Days of ML Competition/notebook7f0a0032d7.ipynb
07540ab4bb1ee4eb26316b2138ae8fbeb61e6e8e
[]
no_license
THEKINGSTAR/Kaggle-30-Days-of-ML
https://github.com/THEKINGSTAR/Kaggle-30-Days-of-ML
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
69,928
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: 'Python 3.8.5 64-bit (''base'': conda)' # name: python385jvsc74a57bd01a68b518e5f681b8e0fe24dde6a797542c00b92f750848d58e34f579234a41ba # --- import numpy as np import math import matplotlib.pyplot as plt def func(t, u, derivadas): resultado = [] for i in derivadas: resultado.append(i(t, u)) return resultado def Euler_exp(derivadas, vIniciais, h, intervalo): n = int((intervalo[1]-intervalo[0])/h) x = np.linspace(intervalo[0], intervalo[1], n) aprox = [] aprox.append(vIniciais) for i in range(0, n): resultado = [] derResult = func(x[i], aprox[i], derivadas) for j in range(0, len(derResult)): resultado.append(aprox[i][j] + h*derResult[j]) aprox.append(resultado) return np.array(aprox).T # + def rungeKuttaMod(derivadas, vIniciais, h, intervalo): n = int((intervalo[1]-intervalo[0])/h) x = np.linspace(intervalo[0], intervalo[1], n) aprox = [] aprox.append(vIniciais) for i in range(0, n): k1 = [] k2 = [] resultados = [] k1 = func(x[i], aprox[i], derivadas) for j in range(0, len(k1)): k1[j] = aprox[i][j] + (1/2)*h*k1[j] k2 = func(x[i]+(h/2), k1, derivadas) for j in range(0, len(k2)): resultados.append(k2[j]*h+aprox[i][j]) aprox.append(resultados) return np.array(aprox).T # - def rungeKuttaMelhorado(derivadas, vIniciais, h, intervalo): n = int((intervalo[1]-intervalo[0])/h) x = np.linspace(intervalo[0], intervalo[1], n) aprox = [] aprox.append(vIniciais) for i in range(0, n): k1 = [] k2 = [] aux = [] resultados = [] k1 = func(x[i], aprox[i], derivadas) for j in range(0, len(k1)): aux.append(aprox[i][j] + h*k1[j]) k2 = func(x[i]+h, aux, derivadas) for j in range(0, len(k2)): resultados.append((k1[j]+k2[j])*(h/2) + aprox[i][j]) aprox.append(resultados) return np.array(aprox).T # + dU1 = lambda t, u: 3*u[0] + 2*u[1] - (2*t**2 + 1) * math.e**t dU2 = lambda t, u: 4*u[0] + 2*u[1] + (t**2 + 2*t -4) * math.e**(2*t) u1 = lambda t: (1/3)*math.e**(5*t) - (1/3)*math.e**(-t) + math.e**(2*t) u2 = lambda t: (1/3)*math.e**(5*t) - (1/3)*math.e**(-t) + t**(2) * math.e**(2*t) derivadas = [dU1, dU2] vIniciais = [1, 1] r = Euler_exp(derivadas, vIniciais, 0.2, [0.0, 1.0]) r2 = [] r3 = [] x = np.linspace(0.0, 1.0, 6) for i in x: r2.append(u1(i)) for i in x: r3.append(u2(i)) plt.plot(x, r[0], label='U1-Aprox') plt.plot(x, r2, label='U1-Esperado') plt.plot(x, r[1], label='U2-Aprox') plt.plot(x, r3, label='U2-Esperado') plt.legend() plt.show() # + dU1 = lambda t, u: -4*u[0] - 2*u[1] + math.cos(t) + 4*math.sin(t) dU2 = lambda t, u: 3*u[0] + u[1] - 3*math.sin(t) u1 = lambda t: 2*math.e**(-t) - 2*math.e**(-2*t) + math.sin(t) u2 = lambda t: (-3)*math.e**(-t) + (2)*math.e**(-2*t) derivadas = [dU1, dU2] vIniciais = [0, -1] r = rungeKuttaMelhorado(derivadas, vIniciais, 0.1, [0.0, 2.0]) r2 = [] r3 = [] x = np.linspace(0.0, 2.0, 21) for i in x: r2.append(u1(i)) for i in x: r3.append(u2(i)) plt.plot(x, r[0], label='U1-Aprox') plt.plot(x, r2, label='U1-Esperado') plt.plot(x, r[1], label='U2-Aprox') plt.plot(x, r3, label='U2-Esperado') plt.legend() plt.show() ermill={"duration": 0.039031, "end_time": "2021-08-22T21:19:02.407623", "exception": false, "start_time": "2021-08-22T21:19:02.368592", "status": "completed"} sample_submission.head() # + papermill={"duration": 0.034326, "end_time": "2021-08-22T21:19:02.463846", "exception": false, "start_time": "2021-08-22T21:19:02.429520", "status": "completed"} train.columns # + papermill={"duration": 0.352685, "end_time": 
"2021-08-22T21:19:02.839401", "exception": false, "start_time": "2021-08-22T21:19:02.486716", "status": "completed"} train.info() # + papermill={"duration": 0.31264, "end_time": "2021-08-22T21:19:03.174934", "exception": false, "start_time": "2021-08-22T21:19:02.862294", "status": "completed"} print("*"*50) train.describe() # + papermill={"duration": 0.058487, "end_time": "2021-08-22T21:19:03.257683", "exception": false, "start_time": "2021-08-22T21:19:03.199196", "status": "completed"} print("*"*50) train.head() # + papermill={"duration": 0.409582, "end_time": "2021-08-22T21:19:03.690884", "exception": false, "start_time": "2021-08-22T21:19:03.281302", "status": "completed"} test.info() test.describe() test.head() # + papermill={"duration": 0.090043, "end_time": "2021-08-22T21:19:03.806145", "exception": false, "start_time": "2021-08-22T21:19:03.716102", "status": "completed"} """ The next code cell separates the target (which we assign to y) from the training features (which we assign to features). """ # Separate target from features y = train['target'] features = train.drop(['target'], axis=1) # Preview features features.head() # + [markdown] papermill={"duration": 0.025086, "end_time": "2021-08-22T21:19:03.856790", "exception": false, "start_time": "2021-08-22T21:19:03.831704", "status": "completed"} # # **##Step 3: Prepare the data # # Next, we'll need to handle the categorical columns (cat0, cat1, ... cat9). # # In the Categorical Variables lesson in the Intermediate Machine Learning course, # you learned several different ways to encode categorical variables in a dataset. # In this notebook, we'll use ordinal encoding and save our encoded features as new variables X and X_test. # # # + papermill={"duration": 4.167646, "end_time": "2021-08-22T21:19:08.049794", "exception": false, "start_time": "2021-08-22T21:19:03.882148", "status": "completed"} # List of categorical columns object_cols = [col for col in features.columns if 'cat' in col] # ordinal-encode categorical columns X = features.copy() X_test = test.copy() ordinal_encoder = OrdinalEncoder() X[object_cols] = ordinal_encoder.fit_transform(features[object_cols]) X_test[object_cols] = ordinal_encoder.transform(test[object_cols]) # Preview the ordinal-encoded features X.head() # + [markdown] papermill={"duration": 0.02609, "end_time": "2021-08-22T21:19:08.102178", "exception": false, "start_time": "2021-08-22T21:19:08.076088", "status": "completed"} # Next, we break off a validation set from the training data. # # # + papermill={"duration": 0.163948, "end_time": "2021-08-22T21:19:08.292562", "exception": false, "start_time": "2021-08-22T21:19:08.128614", "status": "completed"} X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=0) # + [markdown] papermill={"duration": 0.025813, "end_time": "2021-08-22T21:19:08.344671", "exception": false, "start_time": "2021-08-22T21:19:08.318858", "status": "completed"} # # **Step 4: Train a model # # **Now that the data is prepared, the next step is to train a model. # # If you took the Intro to Machine Learning courses, then you learned about Random Forests. In the code cell below, we fit a random forest model to the data. 
# + papermill={"duration": null, "end_time": null, "exception": false, "start_time": "2021-08-22T21:19:08.371060", "status": "running"} # Define the model model = RandomForestRegressor(random_state=1) ####################################################################################### ################ Train the model (will take about 10 minutes to run)################### ####################################################################################### model.fit(X_train, y_train) preds_valid = model.predict(X_valid) print(mean_squared_error(y_valid, preds_valid, squared=False)) # + papermill={"duration": null, "end_time": null, "exception": null, "start_time": null, "status": "pending"} from xgboost import XGBRegressor my_model = XGBRegressor() my_model.fit(X_train, y_train) # - my_model # + papermill={"duration": null, "end_time": null, "exception": null, "start_time": null, "status": "pending"} from sklearn.metrics import mean_absolute_error predictions = my_model.predict(X_valid) print("Mean Absolute Error: " + str(mean_absolute_error(predictions, y_valid))) # + papermill={"duration": null, "end_time": null, "exception": null, "start_time": null, "status": "pending"} """ my_model = XGBRegressor(n_estimators=500) my_model.fit(X_train, y_train) ########################### my_model = XGBRegressor(n_estimators=500) my_model.fit(X_train,y_train,early_stopping_rounds=5,eval_set=[(X_valid, y_valid)],verbose=False) ########################### my_model = XGBRegressor(n_estimators=1000, learning_rate=0.05) my_model.fit(X_train, y_train,early_stopping_rounds=5,eval_set=[(X_valid, y_valid)],verbose=False) """ # + papermill={"duration": null, "end_time": null, "exception": null, "start_time": null, "status": "pending"} my_model = XGBRegressor(n_estimators=1000, learning_rate=0.05, n_jobs=4) my_model.fit(X_train, y_train,early_stopping_rounds=5,eval_set=[(X_valid, y_valid)],verbose=False) # + [markdown] papermill={"duration": null, "end_time": null, "exception": null, "start_time": null, "status": "pending"} # # **# Step 5: Submit to the competition # # We'll begin by using the trained model to generate predictions, which we'll save to a CSV file. # + papermill={"duration": null, "end_time": null, "exception": null, "start_time": null, "status": "pending"} # Use the model to generate predictions #predictions = model.predict(X_test) predictions =my_model.predict(X_test) # Save the predictions to a CSV file output = pd.DataFrame({'Id': X_test.index,'target': predictions}) output.to_csv('submission.csv', index=False) # - output.shape # + prediction = pd.concat([test['id'], pd.DataFrame(y_valid)], axis=1) # rename columns prediction.columns = ['id', 'target'] prediction.to_csv('prediction.csv', index=False) prediction.shape # - # add id column prediction = pd.concat([test['id'], pd.DataFrame(y_valid)], axis=1) # rename columns prediction.columns = ['id', 'target'] # save as ".csv" prediction.to_csv('ml_30_1st_try.csv', index=False) # print shape (optional) prediction.shape prediction.head() prediction re predictor variables. The partial correlation between y and x3 is the correlation between the variables determined taking into account how both y and x3 are related to x1 and x2. # # Formally, this is relationship is defined as: # # ## $\frac{\text{Covariance}(y, x_3|x_1, x_2)}{\sqrt{\text{Variance}(y|x_1, x_2)\text{Variance}(x_3| x_1, x_2)}}$ # # Check out this [link](http://www.itl.nist.gov/div898/handbook/pmc/section4/pmc4463.htm) for full details on this. 
# We can then plot this relationship: result = plot_pacf(df["Seasonal First Difference"].dropna()) # ### Interpretation # # Typically a sharp drop after lag "k" suggests an AR-k model should be used. If there is a gradual decline, it suggests an MA model. # ### Final Thoughts on Autocorrelation and Partial Autocorrelation # # * Identification of an AR model is often best done with the PACF. # * For an AR model, the theoretical PACF “shuts off” past the order of the model. The phrase “shuts off” means that in theory the partial autocorrelations are equal to 0 beyond that point. Put another way, the number of non-zero partial autocorrelations gives the order of the AR model. By the “order of the model” we mean the most extreme lag of x that is used as a predictor. # # # * Identification of an MA model is often best done with the ACF rather than the PACF. # * For an MA model, the theoretical PACF does not shut off, but instead tapers toward 0 in some manner. A clearer pattern for an MA model is in the ACF. The ACF will have non-zero autocorrelations only at lags involved in the model. # _____ # ### Final ACF and PACF Plots # # We've run quite a few plots, so let's just quickly get our "final" ACF and PACF plots. These are the ones we will be referencing in the rest of the notebook below. # _____ fig = plt.figure(figsize=(12,8)) ax1 = fig.add_subplot(211) fig = sm.graphics.tsa.plot_acf(df['Seasonal First Difference'].iloc[13:], lags=40, ax=ax1) ax2 = fig.add_subplot(212) fig = sm.graphics.tsa.plot_pacf(df['Seasonal First Difference'].iloc[13:], lags=40, ax=ax2) # ## Using the Seasonal ARIMA model # # Finally we can use our ARIMA model now that we have an understanding of our data! # For non-seasonal data from statsmodels.tsa.arima_model import ARIMA # + # I recommend you glance over this! # help(ARIMA) # - # ### p,d,q parameters # # * p: The number of lag observations included in the model. # * d: The number of times that the raw observations are differenced, also called the degree of differencing. # * q: The size of the moving average window, also called the order of moving average. # We have seasonal data! model = sm.tsa.statespace.SARIMAX(df['Milk in pounds per cow'],order=(0,1,0), seasonal_order=(1,1,1,12)) results = model.fit() print(results.summary()) results.resid.plot() results.resid.plot(kind='kde') # ## Prediction of Future Values # # Firts we can get an idea of how well our model performs by just predicting for values that we actually already know: df['forecast'] = results.predict(start = 150, end= 168, dynamic= True) df[['Milk in pounds per cow','forecast']].plot(figsize=(12,8)) # ### Forecasting # This requires more time periods, so let's create them with pandas onto our original dataframe! df.tail() # + # https://pandas.pydata.org/pandas-docs/stable/timeseries.html # Alternatives # pd.date_range(df.index[-1],periods=12,freq='M') # - from pandas.tseries.offsets import DateOffset future_dates = [df.index[-1] + DateOffset(months=x) for x in range(0,24) ] future_dates future_dates_df = pd.DataFrame(index=future_dates[1:],columns=df.columns) future_df = pd.concat([df,future_dates_df]) future_df.head() future_df.tail() future_df['forecast'] = results.predict(start = 168, end = 188, dynamic= True) future_df[['Milk in pounds per cow', 'forecast']].plot(figsize=(12, 8)) # Not bad! Pretty cool in fact! 
I hope this helped you see the potential of ARIMA models. Unfortunately, a lot of financial data won't follow this sort of behaviour; in fact, it will often look more like Brownian motion. What is that, you ask? Head on over to the next video section and we'll find out! # # # Great Job!
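# A tiny, purely illustrative sketch of that last point (not part of the
# original lecture): a simulated random walk -- the discrete cousin of Brownian
# motion -- has an ACF that decays very slowly instead of cutting off, which is
# why such series need differencing (or a different model class) before an
# ARIMA-style analysis makes sense. It reuses the `sm` and `plt` imports from
# the cells above.

# +
import numpy as np

np.random.seed(42)
random_walk = np.cumsum(np.random.normal(size=500))

fig = plt.figure(figsize=(12, 4))
ax1 = fig.add_subplot(121)
ax1.plot(random_walk)
ax1.set_title('Simulated random walk')
ax2 = fig.add_subplot(122)
fig = sm.graphics.tsa.plot_acf(random_walk, lags=40, ax=ax2)
# -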
14,477
/delphi_analysis/build_set_preprocessing.ipynb
01dbbc6ea97ba0f71c982c24e8bb24ffdba29f25
[]
no_license
thongonary/CMS_Deep_Learning
https://github.com/thongonary/CMS_Deep_Learning
2
0
null
2018-03-15T21:13:40
2018-03-15T21:13:38
Jupyter Notebook
Jupyter Notebook
false
false
.py
327,970
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Task Day-14(23-06-21) # # Getting and knowing your Data # # There are many kinds of birds: pigeons, ducks, ostriches, penguins… Some are good at flying, others can't fly but run fast. Some swim under water, others wading in shallow pool. # # According to their living environments and living habits, birds are classified into different ecological groups. There are 6 ecological groups of birds: # # 1) Swimming Birds # # 2) Wading Birds # # 3) Terrestrial Birds # # 4) Raptors # # 5) Scansorial Birds # # 6) Singing Birds import pandas as pd birds = pd.read_csv(r"C:\Users\Akshay\OneDrive\Desktop\info\archive\bird.csv") birds # ## See the random 5 entries # birds.sample(5) # ## What is the number of observations in the dataset print(birds.shape[0]) # ## What is the number of columns in the dataset? print(len(birds.columns)) # ## Print the name of all the columns. for x in birds.columns: print(x) # ## What is the name of 10th column? print(birds.columns[9]) # ## What is the type of the observations of the 2th column? print(type(birds[birds.columns[1]])) # ## How is the dataset indexed? print(birds.index) # ## Set the 'type' colum as the index of the dataframe birds.set_index("type", inplace = True) birds # ## Print only the column huml birds["huml"] # ## Print the "huml","humw" columns from SW and P birds[["huml", "humw"]] # ## Select the rows 3 to 7 and the columns 3 to 6 # # print(birds.iloc[3:8][birds.columns[3:7]]) # ## Select rows where df.femw is greater than 5.2 or less than 10.5 birds[(birds["femw"] > 5.2) | (birds["femw"] < 10.5)] # ## Select the third cell down in the column named tarl print(birds["tarl"][2]) # ## Compute how many values are non-missing for each feature over the entire record. print(birds.notnull().sum()) # ## Calculate Mean and Median of "tibl" and "tibw" columns print("Mean") print(birds[["tibl", "tibw"]].mean()) print("Median") print(birds[["tibl", "tibw"]].median()) # ## Discover what is the mean "ulnaw" per every bird category (use Groupby) print(birds.groupby('ulnaw').mean()) # ## Plot graph between all features refuse "type" and "id" columns import matplotlib.pyplot as plt birds[['huml', 'humw', 'ulnal', 'feml', 'femw', 'tibl', 'tibw', 'tarl', 'tarw',]].plot(kind = 'hist') of the data # + id="4FvdGEL573uQ" # Read the Data df = pd.read_csv('uberdrive.csv') # + colab={"base_uri": "https://localhost:8080/", "height": 297} id="rfp142SP73uS" outputId="7c059e9c-86d7-493f-ba8b-0e2b20289efa" # View first 3 rows of data df.head(3) # - print(df) # + id="TYRpS9KWjqDf" df_miles = pd.read_csv('UberDrive_Miles.csv') # + id="ItQdTrU4jqDf" outputId="3c906a61-cfe4-4597-eee9-3bf89bc83206" df_miles.tail(3) # + colab={"base_uri": "https://localhost:8080/", "height": 51} id="oG5qPCEa73uY" outputId="7a36d667-5349-466a-cf7e-a52d42987aae" # understand shape and size of data from Uberdrive print(df.shape) print(df.size) # + id="wkRe1HmjjqDg" outputId="73c8069c-1387-4cc0-8c8d-cf2273ae5169" # check info about data (includes column names, the number of non-null values in it, and data-type for each column.) df.info() print() df_miles.info() # + [markdown] id="Omq73OvI73uf" # 1. PURPOSE column has lots of missing values - **We will talk about handling missing values in the upcoming weeks** # 2. 
Some of the columns have a 1155 records while there are others with 653, why is that? Lets explore # 3. We see some updates can be made in the column names, lets rename the columns # + [markdown] id="PgNx3U4j73ux" # ### Renaming columns # - # var1 = df.columns.str.extractall var1 # + colab={"base_uri": "https://localhost:8080/", "height": 204} id="j0YgTTz-73ux" outputId="92bc3544-8f0f-4b70-d3e7-156b676c777b" # Approach 1 # Replace the * character from all the columns df.columns = df.columns.str.replace("*", "") # Approach 2 # # You can also rename the specific column names df_miles.rename(columns = {'MILES*':'MILES'}, inplace=True) print(df.columns,"\n", df_miles.columns) # + [markdown] id="okKeivRdjqDi" # **The column names were updated.** # + [markdown] id="Ebt8vpgH73u3" # ### <a id = "link3"></a>Filtering dataframes # #### Using null values # - df.PURPOSE.isnull() # + colab={"base_uri": "https://localhost:8080/", "height": 419} id="AA1Def0u73u3" outputId="06b8ba30-5d85-45d8-8bbb-fcacd907526b" # shows the top 5 entries where PURPOSE is null df[df.PURPOSE.isnull()].head(5) # - df[df.PURPOSE.isnull()] # + [markdown] id="0mGILHvP73u5" # # #### Filtering out records based on conditions # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="x3eNeRib73u5" outputId="2f75e4e2-9c8f-4a10-bc9b-5dfad1b7a96a" # Conditions within dataframe df_miles[df_miles['MILES'] > 30].head() # + [markdown] id="8jSvBLWi73vR" # ## <a id = "link2"></a>In a bid to create the driver profile, lets explore the data parameter wise - # # - 1.Destination - (starting and stopping) # # - 2.Time - (hour of the day, day of week, month of year) # # - 3.Grouping two parameters to get more insights # # - 4.Category and Purpose # + [markdown] id="UAamyoph73vR" # ## 1. Destination # ### Understanding the start and stop points # ###### Through the feature, we will try to understand the below points of the driver profile. # - Name and number of all the unique start and stop points # - Popular start and stop points # - Rides with same start and stop points # - Starting point from which most miles have been driven # - Start- stop pairs that are most travelled in terms of distance # # **Let us handle these one by one** # + [markdown] id="vt3uGICkjqDk" # **1. Name and Number of all unique start and stop points** # + colab={"base_uri": "https://localhost:8080/", "height": 629} id="nmE7R0d473vS" outputId="f313f8be-54c5-42a8-eeb4-9ac3b254befe" # Get the unique starting point, unique destination # names of unique start points print(df['START'].unique()) # + colab={"base_uri": "https://localhost:8080/", "height": 34} id="SFUlm5TW73vW" outputId="91b89857-4af6-4c68-b7e4-6a43decca54b" print(df['START'].nunique()) # or use can use the nunique function # + colab={"base_uri": "https://localhost:8080/", "height": 680} id="MNH8DaQ773vY" outputId="5ca7c9ac-266d-4cd8-9016-b9bdc09dd4a7" # Get the names of stopping destinations, unique destinations # Names of unique stopping points print(df['STOP'].unique()) # + colab={"base_uri": "https://localhost:8080/", "height": 34} id="3O2iTz9j73va" outputId="08f687f9-d25d-4f82-8393-3118a91c9474" print(len(df['STOP'].unique())) # count of unique stopping points # + [markdown] id="ki30jr-UjqDl" # **2a. Identify popular start points - top 10** # + colab={"base_uri": "https://localhost:8080/", "height": 204} id="IdfGWVrR73vj" outputId="b201847d-212d-40dc-8d23-ffd5df88d4f1" df['START'].value_counts().head(10) # + [markdown] id="tmMxPKP0jqDl" # **2b. 
Identify popular stop destinations - top 10** # + colab={"base_uri": "https://localhost:8080/", "height": 204} id="0_VAzZd573vl" outputId="44fe8603-afb2-448c-884d-661ff9e150e4" df['STOP'].value_counts().head(10) # + [markdown] id="JVl4EJQCjqDm" # **3. Are there cases where the start and the stop location are the same ?** # + colab={"base_uri": "https://localhost:8080/", "height": 419} id="ax8JSeaj73vn" outputId="eee25698-a344-45da-fd11-7297c3b0219c" df[df['START'] == df['STOP']].head(5) # + id="CWFGmt2djqDm" outputId="55d2d7fc-b8a4-4933-988c-3d5f1084fe1e" df[df['START'] == df['STOP']].shape # + [markdown] id="4Vwn8ov2jqDm" # **288 trips have the same start and stop points** # + [markdown] id="d8F_7jKZjqDm" # **4.Starting point from which the most miles have been driven** # # **In order to use the miles feature, let us now merge the two dataframes so that the all the information is in one dataframe.** # - using merge # # + id="MLBPQNyRjqDm" outputId="4aafb08d-e602-4f49-8f99-5b3c6f7f682b" df = pd.merge(df, df_miles, on = 'Trip_Id', how = 'left') df.head(5) # + [markdown] id="BCSFe-qKjqDn" # **Let's now use groupby function to find the starting point from which the most miles have been driven** # - df.groupby('START')['MILES'].sum().sort_values(ascending = False ).head(10) # + colab={"base_uri": "https://localhost:8080/", "height": 221} id="I1oDU0J-73vr" outputId="3794aca1-8d5a-4e1b-f17c-53155cd90bdd" df2.groupby(['START','STOP'])['MILES'].sum().sort_values(ascending=False).head(10) # + [markdown] id="vZAK9oCujqDn" # **5. Find the top10 start stop pair that have the most miles covered between them ever.** # + [markdown] id="sN8WmJMa73vu" # # Let us drop the unknown locations # df2 = df[df['START'] != 'Unknown Location'] # Makes a new dataframe, which don't have "Unknown Location" as starting point # df2 = df2[df2['STOP'] != 'Unknown Location'] # Further updates the df2 dataframe, by removing "Unknown Location" as stopping point # + colab={"base_uri": "https://localhost:8080/", "height": 300} id="Qzix_S2V73vu" outputId="639a15f5-bdf7-4dbf-8dc7-35a0db4e718e" # Creating a dataframe with the top 10 most miles covered between a start stop pair k3 = df.groupby(['START','STOP'])['MILES'].sum().sort_values(ascending=False).head(10) k3= k3.reset_index() # flatten the dataframe k3['Start-Stop'] = k3['START'] + ' - ' + k3['STOP'] #k3 = df2.groupby(['START','STOP'])['MILES'].sum().sort_values(ascending=False).head(10) #k3= k3.reset_index() # flatten the dataframe #k3['Start-Stop'] = k3['START'] + ' - ' + k3['STOP'] k3 # + [markdown] id="12S-vpsk73v8" # **The most popular start to destination pair is Morrisville-Cary** # + [markdown] id="AEuPLlqK73v8" # <a id = "link4"></a> # ## 2. 
Start Date - End Date # ### Manipulating date & time objects # #### Lets explore the variables using the below points- # - busiest month in terms of number of drives and miles driven # - busiest day of the week and preferred start hour # - peak hours # # We will create more features for the trip data to be able to cater to above profile mappings # + colab={"base_uri": "https://localhost:8080/", "height": 204} id="aMvHdBEA73v9" outputId="c19995a9-3de5-4626-eb2f-b3e176900b48" df.head(3) # + colab={"base_uri": "https://localhost:8080/", "height": 153} id="x73FrX-C73v_" outputId="7b0313d5-db44-4770-bb5d-328d79608930" df.dtypes # + id="0wu_rDvC73wB" # Create columns by converting the start and end date into a datatime format # You can also over write the same column - but for the sake of understanding the difference in formats, we create new columns df['start_dt'] = pd.to_datetime(df['START_DATE']) df['end_dt'] = pd.to_datetime(df['END_DATE']) # + colab={"base_uri": "https://localhost:8080/", "height": 204} id="mzEZTrdZ73wD" outputId="2b1424da-4c94-42c6-a5b1-4d89bc851c69" df.head() # Print first 5 rows of data. # + colab={"base_uri": "https://localhost:8080/", "height": 187} id="-ViyTibxKhM-" outputId="4a2a4e57-c58a-491d-dc6d-16c083d29cda" df.dtypes # See how the dtype is different # + id="BUjYhgB173wH" # Create more columns by using the inbuilt functionalities of datatime module df['start_day'] = df['start_dt'].dt.day df['start_hour'] = df['start_dt'].dt.hour df['start_month'] = df['start_dt'].dt.month df['d_of_wk'] = df['start_dt'].dt.dayofweek # Days encoded as 0-6 ( monday =0, Tue =1 .... ) # + id="Snw6YSyHjqDp" outputId="efb6e94a-8a76-4544-dd02-3ebafca128bd" df.head().T # - lambda y : datetime.strftime(x,'%a') # + id="ZPsOvrKm73wL" df['weekday'] = df['start_dt'].apply(lambda x : datetime.strftime(x,'%a')) # ( or directly convert into the short form) # + id="gPowYL2DjqDq" outputId="0c090465-c52f-4f6d-aa68-65c4db0cb16b" df.head().T # + id="L-Ehy-L873wM" df['cal_month'] = df['start_dt'].apply(lambda x : datetime.strftime(x,'%b')) # + colab={"base_uri": "https://localhost:8080/", "height": 306} id="iCotg9pf73wQ" outputId="8ae77514-2d3c-4124-e4c6-35adf66c6aee" df.head().T # + [markdown] id="A4oM-qwtjqDq" # **Now let us answer the questions above.** # <br> # <br> # **1. Busiest month in terms of number of drives and miles driven** # + colab={"base_uri": "https://localhost:8080/", "height": 255} id="GRUKjqys73wW" outputId="14007238-5e80-4163-d1d2-4fa15af65a78" #groupby calender months and count the number of drives df.groupby('cal_month').count()['Trip_Id'].sort_values(ascending = False) # + [markdown] id="zOto16I4jqDr" # **December appears to be the busiest month in terms of number of drives** # + id="CFBd5lbWjqDr" outputId="a78272ac-c16b-4bae-f8af-c7a68d948e98" #groupby calender months and count the number of drives df.groupby('cal_month').sum()['MILES'].sort_values(ascending = False) # + [markdown] id="_gZmt-fyjqDr" # **October appears to be the busiest month in terms of miles driven** # + [markdown] id="QtWFspbPjqDs" # **2. Busiest day in terms of number of rides** # + colab={"base_uri": "https://localhost:8080/", "height": 170} id="UHWuwGBw73wb" outputId="d91d9d1e-32b6-4f91-ff68-13642d5ef10b" # Which day did the driver get most drives? df.groupby(['weekday']).size() #note that .count() could also have been used. However, .size() makes it look more clean. # + [markdown] id="myRVUvO073we" # **3. 
Peak hours ?** # + colab={"base_uri": "https://localhost:8080/", "height": 442} id="rkesjjDV73we" outputId="bb940da3-1050-4c6e-c1ad-6af72e6fa2cf" df.groupby('start_hour').size() # The number of trips started for each hour. # + [markdown] id="iY9t7tHK73wg" # **Looks like the peak hours seem to be between 13PM - 6PM** # + [markdown] id="FuAyy6-_jqDt" # #### For practice - figure out the trips that are starting and ending at the same time (0 minutes elapsed).<br> # The first step is in the cell below. Try to figure out the rest of the steps after this session. # <br> # + id="w9f_7Toz73wp" df['diff'] = (df['end_dt'] - df['start_dt']) # + [markdown] id="KRW20r1S73wt" # This creates a timedelta datatype # + [markdown] id="WLH7zHmN73wv" # Find the date time units in https://docs.scipy.org/doc/numpy/reference/arrays.datetime.html # # search for 'Datetime Units' # + [markdown] id="rw4cx0F873w6" # #### For practice- Exploring existing features to create new ones - Speed # - Open for all of you to explore and figure out what all can be understood and derived from this feature # + [markdown] id="Ll5jdpAf73xA" # ## 4. Category & Purpose # #### Explore the category and the purpose of the trips through # - Most frequent trip category # - Most frequent trip purpose # - Miles driven per category and purpose # - Percent composition of business miles vs personal miles # - time spent per category and purpose # + [markdown] id="kqJ8MwtDjqDu" # **1. Most frequent trip category** # + colab={"base_uri": "https://localhost:8080/", "height": 68} id="gquL3BUv73xA" outputId="a24986ba-fb5c-4d00-ce7f-8882193af7e0" df['CATEGORY'].value_counts() # + [markdown] id="P1avZc0473xD" # **Most trips are in the business category** # + [markdown] id="VHQYJYB5jqDu" # **2. Most frequent Purpose** # + colab={"base_uri": "https://localhost:8080/", "height": 204} id="1i1t_EpJ73xD" outputId="d94f57e6-7ef6-492f-9403-e3a600330a44" #Purpose df['PURPOSE'].value_counts() # + [markdown] id="I4ninmDJ73xG" # **Most trips are for meetings** # + colab={"base_uri": "https://localhost:8080/", "height": 221} id="g8PeGQyn73xH" outputId="dd785412-1a8d-4288-8008-0ecc65eb9633" #Average distance traveled for each activity df.groupby('PURPOSE').mean()['MILES'].sort_values(ascending = False) # + [markdown] id="Or6uXGAv73xI" # **Now lets try to answer some questions from this data.** # # **Question3**: How many miles were driven per category and purpose ? # # **Question4**: What is percentage of business miles vs personal? # # **Question5**: How much time was spend for drives per category and purpose? - <i>for practice: you will need to create a time difference variable - answers will be shared through a notebook</i> # # + colab={"base_uri": "https://localhost:8080/", "height": 221} id="rLz2LvBk73xJ" outputId="a79af4c4-f50a-4768-8978-34426fe41a2a" #Question3: How many miles were driven per category and purpose ? df.groupby('PURPOSE').sum()['MILES'].sort_values(ascending = False) # + colab={"base_uri": "https://localhost:8080/", "height": 85} id="XpIICZl773xK" outputId="fa82427b-a87a-45ea-9a25-45a054d6f105" #Question3: How many miles were driven per category and purpose ? df.groupby('CATEGORY').sum()['MILES'].sort_values(ascending = False) # + colab={"base_uri": "https://localhost:8080/", "height": 142} id="b_snpubb73xM" outputId="bd25cf6e-b90a-453b-cbe6-9dfa871132a5" #Question4: What is percentage of business miles vs personal? 
df1 = df.groupby('CATEGORY').agg({'MILES':'sum'}) df1 df1.apply(lambda x: x/x.sum()*100).rename(columns = {'MILES':'% of Miles'}) # + [markdown] id="RolqOhtUjqDw" # ## Profile Report - # Through the exercise, we discussed the various aspects the driver profile for the uberdriver data given. The insights received were- # # **Name and number of all the unique start and stop points**<br> # We found the unique start and stop points for the driver. We figured out the localities the driver is active in. # # **Popular start and stop point**<br> # Cary has been the most popular start and stop point # # **Rides with same start and stop points**<br> # 288 such rides were found # # **Starting point from which most miles have been driven**<br> # Unknown location followed by Cary # # **Start- stop pairs that are most travelled in terms of distance**<br> # Morissville - Cary # # **busiest month in terms of number of drives and miles driven**<br> # In terms of no of drives - December # In terms of miles driven - October # # **busiest day of the week**<br> # Friday # # **peak hours**<br> # 1PM - 6PM # # **most frequent trip category**<br> # Business # # **most frequent trip purpose**<br> # Meeting # # **miles driven per category and purpose**<br> # We figured these numbers out. Category wise Business and purpose wise meetings were leading in terms of miles driven # # **percent composition of business miles vs personal miles**<br> # Business - 94% # Personal - 6% # # # ## Summary - # Through this exercise, we tried to check out the data analysis toolkit offered by pandas. We went to explore variables at hand, use groupby, implement datatime manipulation, explored possibility to create new features and various other operations on pandas dataframe. # We also had a sneak peek into the upcoming week's topic of visualization. # # Learners are recommended to explore further on this building on the points discussed in the notebook. # Happy Learning! # + id="fiq-EvubjqDw"
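# A possible sketch for the two practice items left open above (trips with
# 0 minutes elapsed, and a speed feature). This is not the official solution
# notebook; it simply reuses the `diff`, `MILES`, `start_dt` and `end_dt`
# columns created earlier, and the new column names are made up.

# +
# Trip duration in minutes, from the timedelta column created above
df['duration_min'] = df['diff'].dt.total_seconds() / 60

# Trips that start and end at the same time (0 minutes elapsed)
zero_minute_trips = df[df['duration_min'] == 0]
print(zero_minute_trips.shape)

# Average speed in miles per hour, ignoring zero-duration trips
nonzero = df[df['duration_min'] > 0].copy()
nonzero['speed_mph'] = nonzero['MILES'] / (nonzero['duration_min'] / 60)
nonzero['speed_mph'].describe()
# -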
18,885
/basics/Fase 1 - Fundamentos de programacion/Tema 01 - Introduccion informal/Lección 2 (Apuntes) - Textos.ipynb
da79bfcc28fd9df35dd35aa3596255a91c57ec56
[]
no_license
RicardoVeronica/jupyter-notebook
https://github.com/RicardoVeronica/jupyter-notebook
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
18,964
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.15.2
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

import tensorflow as tf

# In TensorFlow everything is represented as a graph, where the nodes are tf.Operation (operations) and the edges are tf.Tensor (tensors).
# Operations perform the computations; they take tensors as input and produce tensors as output.
#
# [`tf.Session`](https://www.tensorflow.org/api_docs/python/tf/Session) is the execution environment for the graph.

hello = tf.constant('Hello, world!')

# [`tf.constant()`](https://www.tensorflow.org/api_docs/python/tf/constant) creates a tensor of type Const.

print(hello)

# Executing the graph requires a tf.Session, which serves as the execution environment.

session = tf.Session()

session.run(hello)

# Several tensors can be passed to `run()`, either as tuples or as dicts.

session.run({'hello': hello})

# You can declare a value that will be supplied or computed later.
#
# [`tf.placeholder()`](https://www.tensorflow.org/api_docs/python/tf/placeholder) is used for this.
#
# The value must be passed in the `feed_dict` of the `Session` object.

hello_placeholder = tf.placeholder(tf.string)

session.run(hello_placeholder, feed_dict={hello_placeholder: 'Hello world'})

# ## Variable
#
# [Variables documentation](https://www.tensorflow.org/guide/variables)
#
# [`tf.Variable`](https://www.tensorflow.org/api_docs/python/tf/Variable) is a mutable tensor that lives outside the session.
#
# Variables must be initialized before use.

# +
hello_variable = tf.Variable("Hello, world!", tf.string)

session.run(tf.global_variables_initializer())

session.run(hello_variable)
# -

# To get the list of uninitialized variables

another_variable = tf.Variable(11, tf.int32)

session.run(tf.report_uninitialized_variables())

# Variables can also be initialized individually

session.run(another_variable.initial_value)

# A named variable

hello_var_array = tf.get_variable("hello_variable", [1, 2, 3])

session.run(hello_var_array.initial_value)

# ## Random functions

# +
import matplotlib.pyplot as plt
# %matplotlib inline

n = 500000
# -

# [`tf.random.uniform()`](https://www.tensorflow.org/api_docs/python/tf/random/uniform)
#
# A tensor of random values. By default the values are between 0 and 1.

# +
random_uniform_vec = tf.random_uniform(shape=(n,))
random_uniform_result = session.run(random_uniform_vec)

plt.hist(random_uniform_result, 100, (0, 1));
# -

# [`tf.random.normal`](https://www.tensorflow.org/api_docs/python/tf/random/normal)
#
# A tensor of normally distributed values.

# +
random_normal_vec = tf.random_normal(shape=(n,))
random_normal_result = session.run(random_normal_vec)

plt.hist(random_normal_result, 100, (-4.2, 4.2));
# -

# [`tf.random.truncated_normal`](https://www.tensorflow.org/api_docs/python/tf/random/truncated_normal)
#
# A tensor with a normal distribution, but with outlying values discarded and re-drawn.

# +
random_truncated_normal_vec = tf.truncated_normal(shape=(n,))
random_truncated_normal_result = session.run(random_truncated_normal_vec)

plt.hist(random_truncated_normal_result, 100, (-4.2, 4.2));
# -

# ## Others

# [`tf.zeros`](https://www.tensorflow.org/api_docs/python/tf/zeros)
#
# A tensor consisting of zeros. By default float32.

zeros_matrix = tf.zeros([2, 3], tf.int32)
session.run(zeros_matrix)
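# A small end-to-end sketch (not part of the original notes) that ties the
# pieces above together: a placeholder for external input, a variable for
# state, and an op combining both, all run through the same `session`.

# +
x = tf.placeholder(tf.float32, shape=(3,))
w = tf.get_variable("w", shape=(3,), initializer=tf.ones_initializer())

# Elementwise product followed by a sum: with w = [1, 1, 1] this is just sum(x)
y = tf.reduce_sum(x * w)

session.run(tf.global_variables_initializer())
session.run(y, feed_dict={x: [1.0, 2.0, 3.0]})  # -> 6.0
# -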
3,451
/.ipynb_checkpoints/Project_2-checkpoint.ipynb
81cf3d1283c078bda3ed37067783aa97bb3f390d
[]
no_license
nleopard/ETL
https://github.com/nleopard/ETL
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
37,614
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Web Stable Diffusion # # This project brings stable diffusion models to web browsers. **Everything runs inside the browser with no server support.** To our knowledge, this is the world’s first stable diffusion completely running on the browser. This notebook is a walk-through on the entire pipeline, including how to import the stable diffusion models from Hugging Face and PyTorch, how to optimize and build the model locally, and how to deploy it to native GPU runtime and WebGPU runtime. Now let’s get started. # ![workflow](site/img/fig/workflow.svg) # ## Install packages # # To import and build the model, we first need to install the on-going development of TVM Unity and other dependencies with the following pip command. # + # !python3 -m pip install --pre torch --upgrade --index-url https://download.pytorch.org/whl/nightly/cpu # !python3 -m pip install -r requirements.txt # has_gpu = !nvidia-smi -L cudav = "-cu116" if has_gpu else "" # check https://mlc.ai/wheels if you have a different CUDA version # !python3 -m pip install mlc-ai-nightly{cudav} -f https://mlc.ai/wheels # - # We import necessary packages and set up the artifact directory. # + from platform import system import tvm from tvm import relax from tvm.relax.frontend.torch import dynamo_capture_subgraphs from tvm.relax.frontend.torch import from_fx from tvm.script import relax as R import torch from torch import fx from web_stable_diffusion import trace from web_stable_diffusion import utils torch_dev_key = "cpu" if system() == "Darwin": target = tvm.target.Target("apple/m1-gpu") device = tvm.metal() else: target = tvm.target.Target("cuda" if has_gpu else "llvm") device = tvm.cuda() if has_gpu else tvm.cpu() # - # !mkdir -p dist # ## Import stable diffusion models # # With necessary packages imported, the first step is to import the stable diffusion PyTorch models into TVM. A standard text-to-image stable diffusion pipeline consists of four stages: # * A text tokenizer which converts the input text prompts to tokens. # * A text encoder (CLIP) which encodes the tokenized text prompts to text embeddings. # * A denoising model (UNet) which denoises a random initial latents for a certain number of steps, guided by the encoded text. # * An image decoder (VAE) which decodes the final latents to an image. # # ![pipeline](site/img/fig/pipeline.svg) # We shall import these models from PyTorch to TVM. As in Web Stable Diffusion we leverage the [wasm port](https://blog.mithrilsecurity.io/porting-tokenizers-to-wasm/) of the Hugging Face [tokenizers library](https://github.com/huggingface/tokenizers), we only need to import the rest three models: CLIP, UNet and VAE. # # PyTorch provides [TorchDynamo](https://pytorch.org/tutorials/intermediate/dynamo_tutorial.html) that can capture the computational graph of a torch model and represent the graph in [Torch FX](https://pytorch.org/docs/stable/fx.html), torch’s graph-level intermediate representation (IR) of a model. TorchDynamo and FX are the tools we leverage. More specifically, we first use TorchDynamo to capture the model’s execution into an FX GraphModule, and then translate the FX GraphModule to a Relax function -- the graph-level IR of TVM Unity. 
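# Before diving into the real models, here is a toy illustration of that capture
# path (it is not part of the stable diffusion pipeline itself): a tiny
# `torch.nn.Module` is traced into an FX GraphModule and then translated to a
# Relax IRModule with the `from_fx` helper imported above. The module and its
# input shape are made up for illustration, assuming these simple elementwise
# ops are supported by the frontend.

# +
class TinyModule(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x) + 1.0


# Capture the computation as an FX GraphModule, then translate it to Relax.
toy_graph = fx.symbolic_trace(TinyModule())
toy_mod = from_fx(toy_graph, [((2, 3), "float32")])
toy_mod.show()
# -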
# Compared with building and deploying the stable diffusion model to, for eaxmple, local CUDA backend, building and deploying the model to web with WebGPU runtime is special, mostly because of the difference of runtime environment: # # **Web runtime has no Python (let alone PyTorch, NumPy). We can only leverage JavaScript for model deployment on web.** # # Considering this major difference, we will need to _simplify the runtime stable diffusion pipeline (written in JavaScript) as much as possible_, and maximize the number of tasks in the build stage, ahead of the web model deployment. Our workflow demonstrates this principal in the two following aspects: # 1. capture more computation to TVM’s IRModule, # 2. separate models’ weights from the IRModule. # ### 1. Capture more computation to TVM’s IRModule # # Because we do not have PyTorch in web runtime, it is necessary to cover those additional operations in our IRModule as well. There are two different approaches to the same goal. Both are very simple: # 1. implement the operations manually with existing Relax infrastructure, or # 2. write a wrapper `torch.nn.Module` which both contains the ML model and the appending/prepending operations. # # In our practice, we adopt both approaches. For some operations we use wrapper `nn.Module`, while for others we write the operation manually. Let’s walk through each of them. # #### ①. The CLIP text encoder # # In the entire pipeline, the text encoder is used twice: one for the prompt and the other one for the negative prompt. Since it is next to the tokenization (which we do not import) and is followed by the concatenation of both prompts’ embeddings, the encoder is a standalone phase. Therefore, we use an `nn.Module` to wrap the single CLIP forward. # # Relax has provided a helper function `dynamo_capture_subgraphs`, which uses TorchDynamo to capture the computational graph, and translate the captured FX GraphModule to a Relax function. In stable diffusion, the maximum length of text prompts is 77. So the input to the CLIP model has length 77. We create a random input as the simulation of tokenized text prompt, and pass it to the wrapper for TorchDynamo capture. def clip_to_text_embeddings(pipe) -> tvm.IRModule: # Define the wrapper torch.nn.Module for CLIP. class CLIPModelWrapper(torch.nn.Module): def __init__(self, clip): super().__init__() self.clip = clip def forward(self, text_input_ids): text_embeddings = self.clip(text_input_ids)[0] return text_embeddings clip = pipe.text_encoder clip_to_text_embeddings = CLIPModelWrapper(clip) # Create random input (77 is the maximum length). text_input_ids = torch.rand((1, 77)).to(torch.int32) # Capture CLIP's computational graph. mod = dynamo_capture_subgraphs( clip_to_text_embeddings.forward, text_input_ids, keep_params_as_input=True, ) assert len(mod.functions) == 1 return tvm.IRModule({"clip": mod["subgraph_0"]}) # #### ②. The embedding concatenation # # This stage concatenates the embeddings of both prompts, and is followed by the huge UNet iteration. It is also standalone, and here we choose to implement the concatenation by hand. 
def concat_embeddings() -> tvm.IRModule: bb = relax.BlockBuilder() cond_embeddings = relax.Var("cond_embeddings", R.Tensor([1, 77, 768], "float32")) uncond_embeddings = relax.Var( "uncond_embeddings", R.Tensor([1, 77, 768], "float32") ) with bb.function("concat_embeddings", [cond_embeddings, uncond_embeddings]): res = bb.emit( relax.op.concat([cond_embeddings, uncond_embeddings], axis=0) ) bb.emit_func_output(res) return bb.get() # #### ③. Latent concat + UNet + classifier-free guidance # # The third stage is the first part of the UNet loop body. It is mostly the UNet forward, while before the UNet forward there is a step of latent concatenation, and after UNet forward, one step of [classifier-free guidance](https://github.com/huggingface/diffusers/blob/79eb3d07d07a2dada172c5958d6fca478c860f16/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#L674-L677) will be performed to force the generation to better match the prompt potentially. Since the latent concatenation and the guidance are immediately before/after the UNet forward which we always have to import whatever, we use a wrapper `nn.Module` to import all of them. def unet_latents_to_noise_pred(pipe, device_str: str) -> tvm.IRModule: class UNetModelWrapper(torch.nn.Module): def __init__(self, unet): super().__init__() self.unet = unet # Default guidance scale factor in stable diffusion. self.guidance_scale = 7.5 def forward(self, latents, timestep_tensor, text_embeddings): # Latent concatenation. latent_model_input = torch.cat([latents] * 2, dim=0) # UNet forward. noise_pred = self.unet(latent_model_input, timestep_tensor, text_embeddings) # Classifier-free guidance. noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) noise_pred = noise_pred_uncond + self.guidance_scale * ( noise_pred_text - noise_pred_uncond ) return noise_pred unet = utils.get_unet(pipe, device_str) unet_to_noise_pred = UNetModelWrapper(unet) graph = fx.symbolic_trace(unet_to_noise_pred) mod = from_fx( graph, [((1, 4, 64, 64), "float32"), ((), "int32"), ((2, 77, 768), "float32")], keep_params_as_input=True, ) return tvm.IRModule({"unet": mod["main"]}) # #### ④. Scheduler step # # The scheduler step stage is the other part of the UNet iteration, and is very important for updating the latents towards the denoising direction. There are many kinds of different schedulers, with each having (possibly) largely different implementation. Here we use the multi-step DPM solver scheduler. # # One feature of schedulers is that schedulers usually maintain a list of history UNet output, and the scheduler step operation takes the maintained history as input internally. Since the step operation is history dependent, we are not able to combine the scheduler step together with the previous UNet part, and have to implement it separately. Because the scheduler step is also standalone mostly, we implement it by hand. def dpm_solver_multistep_scheduler_steps() -> tvm.IRModule: bb = relax.BlockBuilder() # convert_model_output, the first function in multi-step DPM solver. 
sample = relax.Var("sample", R.Tensor((1, 4, 64, 64), "float32")) model_output = relax.Var("model_output", R.Tensor((1, 4, 64, 64), "float32")) alpha = relax.Var(f"alpha", R.Tensor((), "float32")) sigma = relax.Var(f"sigma", R.Tensor((), "float32")) with bb.function( "dpm_solver_multistep_scheduler_convert_model_output", [sample, model_output, alpha, sigma], ): converted_model_output = bb.emit( (sample - sigma * model_output) / alpha, "converted_model_output" ) bb.emit_func_output(converted_model_output) # step, the second function. sample = relax.Var("sample", R.Tensor((1, 4, 64, 64), "float32")) model_output = relax.Var("model_output", R.Tensor((1, 4, 64, 64), "float32")) last_model_output = relax.Var( "last_model_output", R.Tensor((1, 4, 64, 64), "float32") ) consts = [relax.Var(f"c{i}", R.Tensor((), "float32")) for i in range(3)] with bb.function( "dpm_solver_multistep_scheduler_step", [sample, model_output, last_model_output, *consts], ): prev_sample = bb.emit( consts[0] * sample - consts[1] * model_output - consts[2] * (model_output - last_model_output), "prev_sample", ) bb.emit_func_output(prev_sample) return bb.get() # #### ⑤. VAE + image normalization # # The last but one stage is the VAE step followed by an image normalization, which normalizes the value range from `[-1, 1]` to integers in `[0, 255]`. For the same reason as ③, we use a wrapper `nn.Module`. def vae_to_image(pipe) -> tvm.IRModule: class VAEModelWrapper(torch.nn.Module): def __init__(self, vae): super().__init__() self.vae = vae def forward(self, latents): # Scale the latents so that it can be decoded by VAE. latents = 1 / 0.18215 * latents # VAE decode z = self.vae.post_quant_conv(latents) image = self.vae.decoder(z) # Image normalization image = (image / 2 + 0.5).clamp(min=0, max=1) image = (image.permute(0, 2, 3, 1) * 255).round() return image vae = pipe.vae vae_to_image = VAEModelWrapper(vae) z = torch.rand((1, 4, 64, 64), dtype=torch.float32) mod = dynamo_capture_subgraphs( vae_to_image.forward, z, keep_params_as_input=True, ) assert len(mod.functions) == 1 return tvm.IRModule({"vae": mod["subgraph_0"]}) # #### ⑥. Image conversion to RGBA # # To display the image, we need to convert the image to RGBA mode that can be directly rendered by the web runtime. This conversion requires dtype `uint32`, which PyTorch doesn’t support. Therefore, we are unable to combine this stage with the previous one, and need to implement it by hand with Relax. def image_to_rgba() -> tvm.IRModule: from tvm import te def f_image_to_rgba(A): def fcompute(y, x): return ( A[0, y, x, 0].astype("uint32") | (A[0, y, x, 1].astype("uint32") << 8) | (A[0, y, x, 2].astype("uint32") << 16) | tvm.tir.const(255 << 24, "uint32") ) return te.compute((512, 512), fcompute, name="image_to_rgba") bb = relax.BlockBuilder() x = relax.Var("x", R.Tensor([1, 512, 512, 3], "float32")) with bb.function("image_to_rgba", [x]): image = bb.emit( bb.call_te(f_image_to_rgba, x, primfunc_name_hint="tir_image_to_rgba") ) bb.emit_func_output(image) return bb.get() # #### Combine every piece together # # We have described how we import every part of the stable diffusion pipeline into Relax. 
Now we can combine every of them together: # + from diffusers import StableDiffusionPipeline pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") clip = clip_to_text_embeddings(pipe) unet = unet_latents_to_noise_pred(pipe, torch_dev_key) vae = vae_to_image(pipe) concat_embeddings = concat_embeddings() image_to_rgba = image_to_rgba() schedulers = [ dpm_solver_multistep_scheduler_steps(), trace.PNDMScheduler.scheduler_steps() ] mod: tvm.IRModule = utils.merge_irmodules( clip, unet, vae, concat_embeddings, image_to_rgba, *schedulers, ) # - # ### 2. Separate models’ weights from the IRModule # # After the steps above, we get an IRModule which contains the computational graph of the stable diffusion model, as well as the weights of the model. To reduce the size of the built artifact so that it can be universally deployed everywhere (including the web), we separate models’ weights from the IRModule we get. For a weight tensor, we use a placeholder to represent it in the IRModule, instead of letting it reside in the IRModule as a constant tensor. We will save the separated weights to the disk later. At the beginning of the deployment, we will load these weights from disk to memory. # # The separation is implemented as function `relax.frontend.detach_params`. mod, params = relax.frontend.detach_params(mod) # We can try to print out the entire IRModule via # ```python # mod.show() # ``` # to see the models and other functions we have imported in the IRModule. The output will be thousands of lines long, so we do not run it live here. If you try it out, the printed output should look in the following way: # ```python # # from tvm.script import ir as I # # from tvm.script import relax as R # # @I.ir_module # class Module: # @R.function # def clip( # inp_0: R.Tensor((1, 77), dtype="int32"), # self_clip_text_model_embeddings_position_embedding_weight: R.Tensor((77, 768), dtype="float32"), # self_clip_text_model_embeddings_token_embedding_weight: R.Tensor((49408, 768), dtype="float32"), # ... # ) -> R.Tensor((1, 77, 768), dtype="float32"): # R.func_attr({"num_input": 1}) # with R.dataflow(): # lv: R.Tensor((1, 77), dtype="int32") = R.reshape(inp_0, R.shape([1, 77])) # lv1: R.Tensor((1, 77), dtype="int32") = R.astype(lv, dtype="int32") # lv2: R.Tensor((77,), dtype="int32") = R.reshape(lv1, R.shape([77])) # lv3: R.Tensor((77, 768), dtype="float32") = R.take(self_clip_text_model_embeddings_token_embedding_weight, lv2, axis=0) # lv4: R.Tensor((1, 77, 768), dtype="float32") = R.reshape(lv3, R.shape([1, 77, 768])) # lv5: R.Tensor((1, 77), dtype="int32") = R.astype(metadata["relax.expr.Constant"][0], dtype="int32") # lv6: R.Tensor((77,), dtype="int32") = R.reshape(lv5, R.shape([77])) # lv7: R.Tensor((77, 768), dtype="float32") = R.take(self_clip_text_model_embeddings_position_embedding_weight, lv6, axis=0) # lv8: R.Tensor((1, 77, 768), dtype="float32") = R.reshape(lv7, R.shape([1, 77, 768])) # lv9: R.Tensor((1, 77, 768), dtype="float32") = R.add(lv4, lv8) # ... # ``` # Instead of printing out the whole IRModule, what we can do is to print out the names of the functions. # + def print_relax_funcnames(mod: tvm.IRModule): for global_var, func in mod.functions.items(): if isinstance(func, relax.Function): print(global_var.name_hint) print() print_relax_funcnames(mod) # - # We can also print out one of the weight tensors to see what we have captured for model weights. # Print the first weight parameter of the CLIP model. 
params["clip"][0] # By now, we have went through all steps of importing the stable diffusion model to Relax. The above logic of import is implemented in [`web_stable_diffusion/trace/model_trace.py`](https://github.com/mlc-ai/web-stable-diffusion/blob/main/web_stable_diffusion/trace/model_trace.py) and [`web_stable_diffusion/trace/scheduler_trace.py`](https://github.com/mlc-ai/web-stable-diffusion/blob/main/web_stable_diffusion/trace/scheduler_trace.py). # ## Optimize the model # # So far, we have imported all the desired models and functions in the stable diffusion into an IRModule of TVM. This section is about how we transform and optimize the IRModule we get. # ### Default optimization pipeline in Relax # # Relax provides a default optimization pipeline for a given IRModule. This pipeline mainly includes the constant folding optimization and the kernel fusion optimization. # Apply the default optimization pipeline. mod = relax.pipeline.get_pipeline()(mod) # After the kernel fusion optimization in the default optimization pipeline, there will be some unused elements in the IRModule. Here we do a clean up on the IRModule. # + model_names = ["clip", "unet", "vae"] scheduler_func_names = [ name for name in trace.DPMSolverMultistepScheduler.scheduler_steps_func_names() ] entry_funcs = ( model_names + scheduler_func_names + ["image_to_rgba", "concat_embeddings"] ) # Clean up unused parts of the IRModule. mod = relax.transform.DeadCodeElimination(entry_funcs)(mod) # - # ### Lift computation on weight parameters # # Since the stable diffusion models’ weight parameter are always constants at the time of deployment, we can finish the computation around the weight parameters in ahead of deployment. To this end, we separate such computation from the original functions to standalone functions in the IRModule, so that such computation can be done in the build stage. The separation is implemented as the `LiftTransformParams` pass. mod = relax.transform.LiftTransformParams()(mod) # We can see that after applying this pass, three new functions are inside the IRModule. Their names are ended with `"_transform_params"`, which means the functions are in the purpose of computing models’ weight parameters and will be built and executed in the build stage. for global_var, function in mod.functions.items(): if isinstance(function, relax.Function): if global_var.name_hint.endswith("_transform_params"): print( global_var.name_hint, f' # <=== This is the weight parameter computation function for "{global_var.name_hint[:-17]}"', ) else: print(global_var.name_hint) # ### Split the IRModule at build stage and deployment stage # # After last subsection, the IRModule now contains functions both for the build stage and deployment stage. Therefore, we need to split the IRModule into two parts. mod_transform, mod_deploy = utils.split_transform_deploy_mod( mod, model_names, entry_funcs ) # Let’s print the Relax function names in the IRModules after split. # + print("In IRModule for build stage:") print_relax_funcnames(mod_transform) print("In IRModule for deployment stage:") print_relax_funcnames(mod_deploy) # - # ### Prepare for build # # We have finished transforming the IRModule. One last step before building the IRModule is to compute and save the all the constants at deployment to our artifact directory. 
There are two kinds of constants that we need to compute and save: # * the constants of the models’ weight parameters and related computation lifted by pass `LiftTransformParams`, and # * the coefficients used by the stable diffusion schedulers at each UNet iteration. # # Compared with computing them in the deployed model, doing the computation in ahead has several benefits: # * it minimizes the workload of the deployed model, # * we can universally and easily deploy our model on platforms without, e.g., Python environment. # Compute and save the scheduler constants. trace.compute_save_scheduler_consts(artifact_path="dist") # Compute and save the models's weight parameters. new_params = utils.transform_params(mod_transform, params) utils.save_params(new_params, artifact_path="dist") # All transformation and the final build preparation in this section are contained in the `legalize_and_lift_params` function of `build.py`. # ## Build the model # # After all preparation for the build stage, in this section we will build the model. The contents of this section is implemented as the `build` function of `build.py`. To build the model outside this notebook, the command is # ```shell # python3 build.py # ``` # ### Apply provided kernel optimization database # # The IRModule before the build stage contains three kinds of Relax functions: # * Relax functions of ML models (CLIP, UNet and VAE), # * Relax functions of schedulers (scheduler step function), and # * some utility Relax functions (e.g., text embedding concatenation, VAE-output-to-image conversion). # Each of the Relax function calls into many low-level primitive tensor functions. For example, the first scheduler step Relax function of PNDM scheduler is mod_deploy["dpm_solver_multistep_scheduler_step"].show() # As shown above, this scheduler step Relax function calls into two multiplication functions, one division function and one subtraction function. We can print out the multiplication function to see how the primitive tensor functions look like: # + called_gv = mod_deploy["dpm_solver_multistep_scheduler_step"].body.blocks[0].bindings[0].value.args[0] mod_show = tvm.IRModule({called_gv: mod_deploy[called_gv]}) mod_show.show(black_format=False) # - # # To have decently high performance for all such primitive tensor functions, we have provided a pre-tuned database, which contains the optimization for every primitive tensor functions in the IRModule to build. What we need to do here is simply to apply the database to the IRModule, with the help of function `MetaScheduleApplyDatabase`. # + from tvm import meta_schedule as ms db = ms.database.create(work_dir="log_db") with target, db, tvm.transform.PassContext(opt_level=3): mod_deploy = relax.transform.MetaScheduleApplyDatabase()(mod_deploy) # - # After applying the database, the multiplication primitive tensor function shown above now becomes: mod_show = tvm.IRModule({called_gv: mod_deploy[called_gv]}) mod_show.show(black_format=False) # ### Build the model # # With the database applied in the last subsection, every part of the IRModule is ready to be built. We now use `relax.build` to build the IRModule to the desired backend target, e.g., Metal for Mac M1. `relax.build` returns an executable object, which will be taken and executed by the VirtualMachine of Relax. ex = relax.build(mod=mod_deploy, target=target) type(ex) # We export the executable to disk as a shared library. The exported shared library will be loaded back in the deployment stage. 
ex.export_library("dist/stable_diffusion.so") # The build stage right ends here. Let’s recall a bit: after build, the objects we saved to the disk are: # * the shared library, which is the build artifact of the IRModule to deploy, # * the weight parameter constants, # * the scheduler constants. # # With merely these parts, we are able to bring and deploy our stable diffusion model to anywhere that supports minimum TVM runtime. This well demonstrates our concept of universal deployment. # # Of course, we still need a stable diffusion pipeline script that connects all these parts together and describes the procedure of stable diffusion. We will introduce this in the next section -- deployment. # ## Deploy the model locally # # This section will demo how we deploy the built model locally with native GPU runtime. The contents of this section is implemented in `deploy.py`. To deploy the model outside this notebook, the command is # ```shell # python3 deploy.py # ``` # ### Load the model # # The first job of deployment is to load back the model shared library and the constants we saved to disk. # Load the model weight parameters back. const_params_dict = utils.load_params(artifact_path="dist", device=device) # Load the model executable back from the shared library. ex = tvm.runtime.load_module("dist/stable_diffusion.so") # We create a `relax.VirtualMachine` with the executable object loaded back and the targeted device (Metal/CUDA/CPU). vm = relax.VirtualMachine(rt_mod=ex, device=device) # The virtual machine provides interfaces to call into the Relax functions in `mod_deploy` above, which we will use later. # ### Define the scheduler class # # Similar to scheduler classes in Hugging Face diffusers, we also define the scheduler class for stable diffusion runtime. On construction, the scheduler class will load the scheduler constants from the JSON file we saved to the disk at the build stage. The main interface of the scheduler class is the `step` function, which is invoked at the end of each UNet iteration. # # We implement the PNDM scheduler, which is the default scheduler of [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5). # + import json import numpy as np from web_stable_diffusion import runtime class DPMSolverMultistepScheduler(runtime.Scheduler): scheduler_name = "multistep-dpm-solver" def __init__(self, artifact_path: str, device) -> None: # Load the scheduler constants. with open( f"{artifact_path}/scheduler_dpm_solver_multistep_consts.json", "r" ) as file: jsoncontent = file.read() scheduler_consts = json.loads(jsoncontent) def f_convert(data, dtype): return [tvm.nd.array(np.array(t, dtype=dtype), device) for t in data] self.timesteps = f_convert(scheduler_consts["timesteps"], "int32") self.alpha = f_convert(scheduler_consts["alpha"], "float32") self.sigma = f_convert(scheduler_consts["sigma"], "float32") self.c0 = f_convert(scheduler_consts["c0"], "float32") self.c1 = f_convert(scheduler_consts["c1"], "float32") self.c2 = f_convert(scheduler_consts["c2"], "float32") # Initialize the model_output history. self.last_model_output: tvm.nd.NDArray = tvm.nd.empty( (1, 4, 64, 64), "float32", device ) def step( self, vm: relax.VirtualMachine, model_output: tvm.nd.NDArray, sample: tvm.nd.NDArray, counter: int, ) -> tvm.nd.NDArray: # Invoke the functions through VM. 
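        # (Descriptive note: the exact algebra lives inside the traced Relax functions, and this
        # summary is inferred from the DPM-Solver++ formulation rather than read off the code.)
        # `convert_model_output` rescales the raw noise prediction into a data prediction,
        # roughly x0 ~ (sample - sigma_t * eps) / alpha_t, and the multistep update below then
        # combines the current and previous data predictions using the precomputed
        # coefficients c0, c1 and c2 to produce the latents for the next timestep.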
model_output = vm["dpm_solver_multistep_scheduler_convert_model_output"]( sample, model_output, self.alpha[counter], self.sigma[counter] ) prev_latents = vm["dpm_solver_multistep_scheduler_step"]( sample, model_output, self.last_model_output, self.c0[counter], self.c1[counter], self.c2[counter], ) self.last_model_output = model_output return prev_latents # - # ### Define the stable diffusion pipeline # # We now define the stable diffusion pipeline that connects everything together. The pipeline takes the virtual machine we build, the Hugging Face tokenizer, a scheduler and the model weight parameter constants for construction. The main interface of the pipeline takes a prompt and an optional negative prompt as input, and returns the generated image as output. # + from PIL import Image from tqdm import tqdm from transformers import CLIPTokenizer class TVMSDPipeline: def __init__( self, vm: relax.VirtualMachine, tokenizer: CLIPTokenizer, scheduler: runtime.Scheduler, tvm_device, param_dict, ): def wrapper(f, params): def wrapped_f(*args): return f(*args, params) return wrapped_f self.vm = vm self.clip_to_text_embeddings = wrapper(vm["clip"], param_dict["clip"]) self.unet_latents_to_noise_pred = wrapper(vm["unet"], param_dict["unet"]) self.vae_to_image = wrapper(vm["vae"], param_dict["vae"]) self.concat_embeddings = vm["concat_embeddings"] self.image_to_rgba = vm["image_to_rgba"] self.tokenizer = tokenizer self.scheduler = scheduler self.tvm_device = tvm_device self.param_dict = param_dict def __call__(self, prompt: str, negative_prompt: str = ""): # The height and width are fixed to 512. # Compute the embeddings for the prompt and negative prompt. list_text_embeddings = [] for text in [negative_prompt, prompt]: text = [text] # Tokenize the text. text_inputs = self.tokenizer( text, padding="max_length", max_length=self.tokenizer.model_max_length, # 77 return_tensors="pt", ) text_input_ids = text_inputs.input_ids.to(torch.int32) # Clip the text if the length exceeds the maximum allowed length. if text_input_ids.shape[-1] > self.tokenizer.model_max_length: text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length] # Compute text embeddings. text_input_ids = tvm.nd.array(text_input_ids.cpu().numpy(), self.tvm_device) text_embeddings = self.clip_to_text_embeddings(text_input_ids) list_text_embeddings.append(text_embeddings) # Concatenate the text embeddings. text_embeddings = self.concat_embeddings(*list_text_embeddings) # Randomly initialize the latents. latents = torch.randn( (1, 4, 64, 64), device="cpu", dtype=torch.float32, ) latents = tvm.nd.array(latents.numpy(), self.tvm_device) # UNet iteration. for i in tqdm(range(len(self.scheduler.timesteps))): t = self.scheduler.timesteps[i] noise_pred = self.unet_latents_to_noise_pred(latents, t, text_embeddings) latents = self.scheduler.step(self.vm, noise_pred, latents, i) # VAE decode. image = self.vae_to_image(latents) # Transform generated image to RGBA mode. image = self.image_to_rgba(image) return Image.fromarray(image.numpy().view("uint8").reshape(512, 512, 4)) # - # ### Run # # We are ready to go! Let’s instantiate the pipeline and pass in a real prompt to generate the image. pipe = TVMSDPipeline( vm=vm, tokenizer=CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14"), scheduler=runtime.DPMSolverMultistepScheduler(artifact_path="dist", device=device), tvm_device=device, param_dict=const_params_dict, ) # + import time prompt = "A cat swimming in a lake." 
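# The pipeline defined above also accepts an optional negative prompt, e.g.
#   image = pipe(prompt, negative_prompt="blurry, low resolution")   # illustrative values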
start = time.time()
image = pipe(prompt)
end = time.time()
print(f"Time elapsed: {end - start} seconds.")
# -

# We can use `display` to showcase the generated image.

display(image)

# ## Deploy on web
#
# We have tried deploying the stable diffusion model with the native GPU runtime. Now let's deploy stable diffusion to the web with the WebGPU runtime.
#
# To deploy to the web, everything follows the same procedure. The minor differences are:
# * we need to build the model for the WebGPU backend instead of the Metal backend,
# * we need to implement the stable diffusion pipeline and the scheduler runtime in JavaScript.
#
# Nevertheless, both share the same spirit as before -- just in another form.

# ### Install prerequisites
#
# We first install some prerequisite packages.
#
# 1. [emscripten](https://emscripten.org). It is an LLVM-based compiler which compiles C/C++ source code to WebAssembly.
#     - Follow the [installation instruction](https://emscripten.org/docs/getting_started/downloads.html#installation-instructions-using-the-emsdk-recommended) to install the latest emsdk.
#     - Source `emsdk_env.sh` by `source path/to/emsdk_env.sh`, so that `emcc` is reachable from PATH and the command `emcc` works.
# 2. [Rust](https://www.rust-lang.org/tools/install).
# 3. [`wasm-pack`](https://rustwasm.github.io/wasm-pack/installer/). It helps build Rust-generated WebAssembly, which is used for the tokenizer in our case.
# 4. Install jekyll by following the [official guides](https://jekyllrb.com/docs/installation/). It is the package we use for the website.
# 5. Install jekyll-remote-theme by command
# ```shell
# gem install jekyll-remote-theme
# ```
# 6. Install [Chrome Canary](https://www.google.com/chrome/canary/). It is a developer version of Chrome that enables the use of WebGPU.

# We can run the following commands to verify that we have installed them properly and can find these programs.

# !emcc
# !jekyll
# !wasm-pack

# After verifying the functionality of these commands, we run `scripts/prep_deps.sh` to prepare all the necessary dependencies for the web build.

# + language="bash"
# export TVM_HOME=3rdparty/tvm
# ./scripts/prep_deps.sh
# -

# The execution of `scripts/prep_deps.sh` is expected to end with
# ```
# Copy /path/to/tvm/web/dist/tvmjs.bundle.js to dist
# Copy /path/to/tvm/web/dist/wasm/tvmjs_runtime.wasi.js to dist
# ```

# ### Build the model to WebGPU backend
#
# We now build the model for the WebGPU backend and export the executable to disk in the WebAssembly file format.
#
# In the previous section on building the model for the native GPU backend, the command was
# ```shell
# python3 build.py
# ```
# To build for the WebGPU backend, we still use `build.py`, just with an argument specifying the target:

# + language="bash"
# export TVM_HOME=3rdparty/tvm
# python3 build.py --target webgpu
# -

# ### Set up the website
#
# The last thing to do is to set up the site by running the following command in a terminal session:
# ```shell
# ./scripts/local_deploy_site.sh
# ```
#
# Once the website is set up, open `localhost:8888/` in Chrome Canary to try out the demo on your local machine!
#
# ---
#
# _Remark: don't forget to launch Chrome Canary with_
# ```shell
# /Applications/Google\ Chrome\ Canary.app/Contents/MacOS/Google\ Chrome\ Canary --enable-dawn-features=disable_robustness
# ```
# _to turn off the robustness check from Chrome._
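# As a final sanity check (not part of the original walkthrough), we can confirm that the
# artifacts the web page depends on are actually in `dist/`. The scheduler-constants file name
# comes from the runtime code above; the WebGPU build output name is an assumption and may
# differ in your checkout of the repository.

# +
import os

# Hypothetical artifact list -- adjust the names to match your local `dist/` contents.
expected = [
    "scheduler_dpm_solver_multistep_consts.json",  # saved by compute_save_scheduler_consts above
    "stable_diffusion_webgpu.wasm",                # assumed name of the WebGPU build output
]
for name in expected:
    path = os.path.join("dist", name)
    print(path, "found" if os.path.exists(path) else "MISSING")
# -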
35,955
/3.SimpleMusicPlayer.ipynb
20291cfc625eee0caf35959dcf51cec14d3cbb76
[]
no_license
adis300/py-music
https://github.com/adis300/py-music
2
0
null
null
null
null
Jupyter Notebook
false
false
.py
73,214
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # name: python3 # --- # ### 난수 발생과 카운팅 import numpy as np # ### 시드 설정하기 np.random.rand(5) np.random.rand(5) np.random.seed(2020) np.random.rand(5) np.random.seed(2020) np.random.rand(5) # #### 데이터의 순서 바꾸기 x = np.arange(10) x np.random.shuffle(x) x # #### 데이터 샘플링 np.random.choice(5, 5, replace=False) # shuffle 명령과 같다. np.random.choice(5, 5, replace=True) np.random.choice(5, 3, replace=False) # 3개만 선택 np.random.choice(5, 10, p=[0.1, 0, 0.3, 0.6, 0]) # 선택 확률을 다르게 해서 10개 선택 # #### 난수 생성 np.random.rand(5) np.random.randn(10) np.random.randint(10, size=10) np.random.randint(10, 20, size=10) x = np.random.randn(250) * 100 + 10000 x[:10] price = 10000 tomorrow = price * (1 + np.random.randn(1)/100.) tomorrow[0] # 연습문제 3.5.2 prices = [] price = 10000 for i in range(250): tmp = price * (1 + np.random.randn(1)/100.) price = tmp[0] prices.append(price) prices[:10] # #### 정수 데이터 카운팅 np.unique([11, 11, 2, 2, 34, 34]) a = np.array(['a', 'b', 'b', 'c', 'a']) index, count = np.unique(a, return_counts=True) index count np.bincount([1, 1, 2, 2, 2, 3], minlength=6) D", 1), ("D", 1), ("C", 1), ("E", 1), ("G", 1), ("G", 1), ("C", 2)] music1 = np.array([]) for note, duration in sheet: music1 = np.append(music1, create_sound(freq_dict[note], duration * 0.5)) sd.play(music1, fs) # + # Listen to some EEG signal, what does it sound like import pickle # open a file, where you stored the pickled data eeg_file = open('./res/meditation_focus.pickle', 'rb') # dump information to that file eeg_data = pickle.load(eeg_file) print(eeg_data) # + meditation = np.array(eeg_data["meditation"]) normal_denom = np.max(np.abs(meditation)) plt.plot(meditation[960:1920]) meditation /= normal_denom sd.play(meditation, fs) # + focus = np.array(eeg_data["focus"]) plt.plot(focus[960:1920]) focus /= normal_denom sd.play(focus, fs) # + # Write your file as a wav file from scipy.io import wavfile def write_wav(filename_no_ext, fs, sound): wavfile.write(filename_no_ext + '.wav', fs, sound) # - # # All workshop material available online at # [https://github.com/adis300/py-music](https://github.com/adis300/py-music) # # <img src="./img/repo_address.png" /> th.join(target_img_path, 'x_test', filename + '.npy'), resized) np.save(os.path.join(target_img_path, 'y_test', filename + '.npy'), norm) # -
2,594
/notebook.ipynb
46031ea0ec8412bb14f80fb072caeb4f4f7db9f6
[]
no_license
andyheko/Predict-Blood-Donations
https://github.com/andyheko/Predict-Blood-Donations
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
28,302
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python [Root] # language: python # name: Python [Root] # --- # START NOTEBOOK # ## Set-up # + from hydroplots import * from leach_hydrology import * import numpy as np import mpld3 mpld3.enable_notebook() from mpld3 import plugins # Plot graphs within this document # %matplotlib inline # Plot graphs outside (for interaction) # # %matplotlib from pestmob import * from mixinglayer import * # - # ### Microcosm d = (14.93 * 2) # Diameter of falcon tube (mm) area = ((d / 2) ** 2) * 3.1416 # (mm2) zl = soil_height1 = 20 # Mixing layer depth in mm # ### Soil - Hydrological Characteristics 13.45*10/60 ''' Hydrological controlling parameters''' # Alteck (Martine Trautmann, sampled pre-event) porosity_crop = 0.61 # Crop soil kSat_crop = 2.24 # mm/min (13.45 cm/h - Crop Soil) kSat_crop2 = kSat_crop/100 # ov_1 = 0.20 # Initial water content m3. m-3 ovSat_crop = 0.61 # Saturated water content (assumed) psi_crop = 1100 # soil suction Alteck (mm) psi_crop2 = 617.0 # soil suction Alteck (mm) # (Lefrancq, 2014: 61.7 cm , p.160; 110 cm, p.189) # # Results # ## Hydrology - 1st pulse # ##### Observed percolation # + # fresh, fresh, aged, aged # all at 6 min, high inetnesity leach_high_6min = np.array([16.253, 12.958, 17.536, 14.29]) # all at 12 min, med intensity leach_med_12min = np.array([10.089, 5.902, 13.981, 10.602]) # all at 30min, med intensity leach_med_30min = np.array([49.197, 40.402, 45.772, 47.201]) # all at 30min, low intensity leach_low_30min = np.array([20.037, 17.508, 22.376, 20.085]) # - # #### Computation water_data = leachsim(kSat = kSat_crop, soil_height=soil_height1, soil = 'Alteck', psi=psi_crop) water_eval = leachsim2( leach_high_6min, leach_med_12min, leach_med_30min, leach_low_30min, kSat = [kSat_crop], soil_height=soil_height1, soil = 'Alteck', dtGA = 1, AGED = False, first_cycle=True ) water_eval = leachsim2( leach_high_6min, leach_med_12min, leach_med_30min, leach_low_30min, kSat = [kSat_crop], soil_height=soil_height1, soil = 'Alteck', dtGA = 1, AGED = True, first_cycle=True ) # + # Time cum_time_30min = water_data[:, 0] # Cummulative infiltration cum_inf_135mmh = water_data[:, 4] cum_inf_55mmh = water_data[:, 5] cum_inf_30mmh = water_data[:, 6] # Cummulative leaching cum_leach_135mmh = water_data[:, 7] cum_leach_55mmh = water_data[:, 8] cum_leach_30mmh = water_data[:, 9] # Ponding roff_135mmh = water_data[:, 10] roff_55mmh = water_data[:, 11] roff_30mmh = water_data[:, 12] # Cummulative ponding cum_roff_135mmh = water_data[:, 13] cum_roff_55mmh = water_data[:, 14] cum_roff_30mmh = water_data[:, 15] infil_135mmh = water_data[:, 16] infil_55mmh = water_data[:, 17] infil_30mmh = water_data[:, 18] percol_data1 = stackdata3(cum_time_30min, cum_leach_135mmh, cum_leach_55mmh, cum_leach_30mmh) runoff_data1 = stackdata3(cum_time_30min, cum_roff_135mmh, cum_roff_55mmh, cum_roff_30mmh) infil_data1 = stackdata3(cum_time_30min, infil_135mmh, infil_55mmh, infil_30mmh) time_size_135mmh = water_data[:, 19] time_size_55mmhA = water_data[:, 20] time_size_55mmhB = water_data[:, 20] time_size_30mmh = water_data[:, 21] time_sizes1 = [time_size_135mmh, time_size_135mmh, time_size_55mmhA, time_size_55mmhA, time_size_55mmhB, time_size_55mmhB, time_size_30mmh, time_size_30mmh] # - hydroplot(percol_data1, "Percolated at 135mm/h", "Percolated at 55mm/h", "Percolated at 30mm/h", leach_high_6min, leach_med_12min, leach_med_30min, 
leach_low_30min, "Leached Volume [mL] - Crop Soil 1st Pulse") # ## Transport - 1st pulse # ** Initial and observed mass in leachate and ponding ** # + # Dictionary contains: # Scenario: (initial_mass, leached_mass_observed, ponded_mass_obs, initial_mass_error, error_leach, error_pond) mxCr_dict_S_1st = { 'a_high_0d': (1818.12, 138.1, 'nan', 755.5, 8, 0.), 'b_high_1d': (1472.67, 207.1, 'nan', 631.3, 12, 0.), 'c_med12_0d': (1818.12, 201.0, 'nan', 755.5, 11, 0.), 'd_med12_1d': (1472.67, 50.4, 'nan', 631.3, 3, 0.), 'e_med30_0d': (1818.12, 641.8, 'nan', 755.5, 36, 0.), 'f_med30_1d': (1472.67, 356.8, 'nan', 631.3, 20, 0.), 'g_low_0d': (1818.12, 177.0, 'nan', 755.5, 10, 0.), 'h_low_1d': (1472.67, 293.5, 'nan', 631.3, 16, 0.) } # Dictionary contains: # Scenario: (initial_mass, leached_mass_observed, ponded_mass_obs, initial_mass_error, error_lecah, error_pond) mxCr_dict_L_1st = { 'a_high_0d': (1518.06, 145.4, 'nan', 648.3, 8, 0.), 'b_high_1d': (1413.28, 283.5, 'nan', 597.4, 16, 0.), 'c_med12_0d': (1518.06, 158.4, 'nan', 648.3, 9, 0.), 'd_med12_1d': (1413.28, 262.3, 'nan' , 597.4, 15, 0.), 'e_med30_0d': (1518.06, 674.9, 'nan', 648.3, 38, 0.), 'f_med30_1d': (1413.28, 360.2, 'nan', 597.4, 20, 0.), 'g_low_0d': (1518.06, 418.2, 'nan', 648.3, 23, 0.), 'h_low_1d': (1413.28, 480.9, 'nan', 597.4, 27, 0.) } # - # ### Soil - Transport Charachteristics # + # Soil characteristics # Initial bulk density options: # pb_crop_i = 0.1/10**3 # bulk density (g/cm^3) -> g/mm^3 (FAKE TEST) # pb_crop_i1 = 0.99/10**3 # bulk density (g/cm^3) -> g/mm^3 (M. Trautmann) pb_crop_i1 = 2.61/10**3 # inital 1st pulse, calc. from experiment cond. pb_crop_i2 = 3.59/10**3 # initial 2nd pulse, calc. from experiment cond. # Final bulk density options (1st and 2nd pulses) pb_crop_f1 = 3.59/10**3 # final1, calculated from experimental conditions. # pb_crop_f1 = 13.59/10**3 # final1, TEST pb_crop_f2 = 3.76/10**3 # final2, calculated from experimental conditions. 
# pb_crop_f2 = 13.76/10**3 # final2, TEST porosity_crop = 0.61 # Crop soil # Assumed (used to calculate Reynolds number) runoff_vel = 20.0 # mm/min # Fraction organic matter and carbon (Scenario 1) fom_crop_sterile = 3.87/100.0 fom_crop_untreat = 5.51/100.0 foc_crop_sterile = 0.58*fom_crop_sterile foc_crop_untreat = 0.58*fom_crop_untreat # Soil characteristics (OC Black & Walkley - Scenario 2) # foc_crop_untreat2 = 2.04/100 # foc_crop_sterile2 = 0.70*foc_crop_untreat2 # - # ### Metalaxyl Properties # + # Pesticide Koc Koc_mexyl = [163.0, 80, 50, 40.0, 30, 20] # [(a) , (b), (c)] [ml/g] # Koc_mexyl = [30, 25, 20, 20.0, 20, 20] # Koc_mexyl = [50, 30, 20, 10, 2, 1] Koc_mexyl = np.array(Koc_mexyl)*10**3 # [mm3/g] # Kd (a) - NPIC @ http://npic.orst.edu/ingred/ppdmove.htm Kd_mexylA_crop_sterile = Koc_mexyl[0]*foc_crop_sterile Kd_mexylA_crop_untreat = Koc_mexyl[0]*foc_crop_untreat # Kd (b) - PAN @ http://www.pesticideinfo.org/ Kd_mexylB_crop_sterile = Koc_mexyl[1]*foc_crop_sterile Kd_mexylB_crop_untreat = Koc_mexyl[1]*foc_crop_untreat # Kd (c) - https://toxnet.nlm.nih.gov/cgi-bin/sis/search/a?dbs+hsdb:@term+@DOCNO+7061 Kd_mexylC_crop_sterile = Koc_mexyl[2]*foc_crop_sterile Kd_mexylC_crop_untreat = Koc_mexyl[2]*foc_crop_untreat Kd_mexylD_crop_sterile = Koc_mexyl[3]*foc_crop_sterile Kd_mexylD_crop_untreat = Koc_mexyl[3]*foc_crop_untreat Kd_mexylE_crop_sterile = Koc_mexyl[4]*foc_crop_sterile Kd_mexylE_crop_untreat = Koc_mexyl[4]*foc_crop_untreat Kd_mexylF_crop_sterile = Koc_mexyl[5]*foc_crop_sterile Kd_mexylF_crop_untreat = Koc_mexyl[5]*foc_crop_untreat Kd_mexyl_sterile = [Kd_mexylA_crop_sterile, Kd_mexylB_crop_sterile, Kd_mexylC_crop_sterile, Kd_mexylD_crop_sterile, Kd_mexylE_crop_sterile, Kd_mexylF_crop_sterile] Kd_mexyl_living = [Kd_mexylA_crop_untreat, Kd_mexylB_crop_untreat, Kd_mexylC_crop_untreat, Kd_mexylD_crop_untreat, Kd_mexylE_crop_untreat, Kd_mexylF_crop_untreat] # + # kdmx_array = np.asarray(Kd_mexyl_sterile) # np.log10(kdmx_array) # - # ### Computation transport - 1st pulse # Any length unit input must be: "mm" pest_sterile_1st = pest_test3( Kd_mexyl_sterile, mxCr_dict_S_1st, pb_crop_i1, pb_crop_f1, ovSat_crop, percol_data1, percol_data1, runoff_data1, runoff_data1, time_sizes1, area, soil_height1, d, runoff_vel, KFILM = True, first_cycle = True, living = False) # Any length unit input must be: "mm" pest_living_1st = pest_test3( Kd_mexyl_living, mxCr_dict_L_1st, pb_crop_i1, pb_crop_f1, ovSat_crop, percol_data1, percol_data1, runoff_data1, runoff_data1, time_sizes1, area, soil_height1, d, runoff_vel, KFILM = True, first_cycle = True, living = True) # #### Sterile time series # + # Cumulative leachate sterilized high_0d_cum_mass_out_dt = pest_sterile_1st[:, 1] high_1d_cum_mass_out_dt = pest_sterile_1st[:, 2] med12_0d_cum_mass_out_dt = pest_sterile_1st[:, 3] med12_1d_cum_mass_out_dt = pest_sterile_1st[:, 4] med30_0d_cum_mass_out_dt = pest_sterile_1st[:, 5] med30_1d_cum_mass_out_dt = pest_sterile_1st[:, 6] low_0d_cum_mass_out_dt = pest_sterile_1st[:, 7] low_1d_cum_mass_out_dt = pest_sterile_1st[:, 8] # Ponded mass high_0d_overmass_dt = pest_sterile_1st[:, 9] high_1d_overmass_dt = pest_sterile_1st[:, 10] med12_0d_overmass_dt = pest_sterile_1st[:, 11] med12_1d_overmass_dt = pest_sterile_1st[:, 12] med30_0d_overmass_dt = pest_sterile_1st[:, 13] med30_1d_overmass_dt = pest_sterile_1st[:, 14] low_0d_overmass_dt = pest_sterile_1st[:, 15] low_1d_overmass_dt = pest_sterile_1st[:, 16] mass_percol_sterile1 = stackdata8(cum_time_30min, high_0d_cum_mass_out_dt, high_1d_cum_mass_out_dt, 
med12_0d_cum_mass_out_dt, med12_1d_cum_mass_out_dt, med30_0d_cum_mass_out_dt, med30_1d_cum_mass_out_dt, low_0d_cum_mass_out_dt, low_1d_cum_mass_out_dt) mass_pond_sterile1 = stackdata8(cum_time_30min, high_0d_overmass_dt, high_1d_overmass_dt, med12_0d_overmass_dt, med12_1d_overmass_dt, med30_0d_overmass_dt, med30_1d_overmass_dt, low_0d_overmass_dt, low_1d_overmass_dt) # - # #### Living time series # + # Cumulative leachate high_0d_cum_mass_out_dt = pest_living_1st[:, 1] high_1d_cum_mass_out_dt = pest_living_1st[:, 2] med12_0d_cum_mass_out_dt = pest_living_1st[:, 3] med12_1d_cum_mass_out_dt = pest_living_1st[:, 4] med30_0d_cum_mass_out_dt = pest_living_1st[:, 5] med30_1d_cum_mass_out_dt = pest_living_1st[:, 6] low_0d_cum_mass_out_dt = pest_living_1st[:, 7] low_1d_cum_mass_out_dt = pest_living_1st[:, 8] # Ponded mass high_0d_overmass_dt = pest_living_1st[:, 9] high_1d_overmass_dt = pest_living_1st[:, 10] med12_0d_overmass_dt = pest_living_1st[:, 11] med12_1d_overmass_dt = pest_living_1st[:, 12] med30_0d_overmass_dt = pest_living_1st[:, 13] med30_1d_overmass_dt = pest_living_1st[:, 14] low_0d_overmass_dt = pest_living_1st[:, 15] low_1d_overmass_dt = pest_living_1st[:, 16] mass_percol_living1 = stackdata8(cum_time_30min, high_0d_cum_mass_out_dt, high_1d_cum_mass_out_dt, med12_0d_cum_mass_out_dt, med12_1d_cum_mass_out_dt, med30_0d_cum_mass_out_dt, med30_1d_cum_mass_out_dt, low_0d_cum_mass_out_dt, low_1d_cum_mass_out_dt) mass_pond_living1 = stackdata8(cum_time_30min, high_0d_overmass_dt, high_1d_overmass_dt, med12_0d_overmass_dt, med12_1d_overmass_dt, med30_0d_overmass_dt, med30_1d_overmass_dt, low_0d_overmass_dt, low_1d_overmass_dt) # - # ## Plotting transport - Metalaxyl # ### Sterile (1st Pulse, Crop Soil) pestiplot_condition( mass_percol_sterile1, mxCr_dict_S_1st, 'Metalaxyl', soil_type='Crop Soil', cycle = '1st pulse', LEACH = True, STERILE = True ) # ### Living (1st Pulse, Crop Soil) pestiplot_condition( mass_percol_living1, mxCr_dict_L_1st, 'Metalaxyl', soil_type='Crop Soil', cycle = '1st pulse', LEACH = True, STERILE = False ) # # Hydrology - 2nd pulse ''' Hydrological controlling parameters''' ov_2 = ovSat_crop - 0.038 # Initial water content m3. m-3 psi_crop = 1100 # soil suction Alteck mm psi_cropB = 617 # soil suction Alteck mm # (Lefrancq, 2014: 61.7 cm , p.160; 110 cm, p.189) soil_height2 = 20 # mm # **Observed Percolation - 2nd pulse** # + # Order if array is: # [sterile, untreat, sterile_aged, untreat_aged] # At 6 min, high inetnesity leach_high_6min = np.array([14.192, 8.245, 2.410, 5.469]) # At 12 min, med intensity leach_med_12min = np.array([18.672, 19.0, 0.830, 11.407]) # At 30min, med intensity leach_med_30min = np.array([12.697, 2.473, 3.52, 20.291]) # At 30min, low intensity leach_low_30min = np.array([29.656, 9.375, 0.409, 3.385]) # - # **Observed Ponding - 2nd pulse** # + # [sterile, untreat, sterile_aged, untreat_aged] # all at 6 min, high inetnesity roff_high_6min = np.array([10.824, 20.935, 24.75, 19.041]) # all at 12 min, med intensity roff_med_12min = np.array([0, 3.907, 19.436, 7.313]) # all at 30min, med intensity roff_med_30min = np.array([43.764, 28.911, 51.964, 33.478]) # all at 30min, low intensity roff_low_30min = np.array([0, 22.618, 28.598, 27.314]) # - # ### Inverse Ksat determination # # Based on hisotric rainfall pattern, fresh and aged. 
water2_ktest_fresh = leachsim2( leach_high_6min, leach_med_12min, leach_med_30min, leach_low_30min, kSat = [kSat_crop/25, kSat_crop/50, kSat_crop/100, kSat_crop/125, kSat_crop/150, kSat_crop/175, kSat_crop/200, kSat_crop/250], soil_height=soil_height2, soil = 'Alteck', dtGA = 1, AGED = False ) water2_ktest_aged = leachsim2( leach_high_6min, leach_med_12min, leach_med_30min, leach_low_30min, kSat = [kSat_crop/50, kSat_crop/100, kSat_crop/125, kSat_crop/150, kSat_crop/175, kSat_crop/200, kSat_crop/250, kSat_crop/700, kSat_crop/1000], soil_height=soil_height2, soil = 'Alteck', dtGA = 1, AGED = True ) # #### Time series fresh # + # Time axis cum_time_30min = water2_ktest_fresh[:, 0] # Cumulative leachate cum_leach_135mmh = water2_ktest_fresh[:, 9] cum_leach_55mmhA = water2_ktest_fresh[:, 10] cum_leach_55mmhB = water2_ktest_fresh[:, 11] cum_leach_30mmh = water2_ktest_fresh[:, 12] # Group each compartment for graphing percol_data2_fresh = stackdata4( cum_time_30min, cum_leach_135mmh, cum_leach_55mmhA, cum_leach_55mmhB, cum_leach_30mmh) # Ponding fresh roff_135mmh = water2_ktest_fresh[:, 13] roff_55mmhA = water2_ktest_fresh[:, 14] roff_55mmhB = water2_ktest_fresh[:, 15] roff_30mmh = water2_ktest_fresh[:, 16] # Cummulative ponding cum_roff_135mmh = water2_ktest_fresh[:, 17] cum_roff_55mmhA = water2_ktest_fresh[:, 18] cum_roff_55mmhB = water2_ktest_fresh[:, 19] cum_roff_30mmh = water2_ktest_fresh[:, 20] runoff_data2_fresh = stackdata4( cum_time_30min, cum_roff_135mmh, cum_roff_55mmhA, cum_roff_55mmhB, cum_roff_30mmh) time_size_135mmh = water2_ktest_fresh[:, 25] time_size_55mmhA = water2_ktest_fresh[:, 26] time_size_55mmhB = water2_ktest_fresh[:, 27] time_size_30mmh = water2_ktest_fresh[:, 28] time_sizes2 = [time_size_135mmh, time_size_55mmhA, time_size_55mmhB, time_size_30mmh] # - # #### Time series aged # + # Time axis cum_time_30min = water2_ktest_aged[:, 0] # Cumulative leachate cum_leach_135mmh = water2_ktest_aged[:, 9] cum_leach_55mmhA = water2_ktest_aged[:, 10] cum_leach_55mmhB = water2_ktest_aged[:, 11] cum_leach_30mmh = water2_ktest_aged[:, 12] # Group each compartment for graphing percol_data2_aged = stackdata4( cum_time_30min, cum_leach_135mmh, cum_leach_55mmhA, cum_leach_55mmhB, cum_leach_30mmh) # Ponding Aged roff_135mmh = water2_ktest_aged[:, 13] roff_55mmhA = water2_ktest_aged[:, 14] roff_55mmhB = water2_ktest_aged[:, 15] roff_30mmh = water2_ktest_aged[:, 16] # Cummulative ponding cum_roff_135mmh = water2_ktest_aged[:, 17] cum_roff_55mmhA = water2_ktest_aged[:, 18] cum_roff_55mmhB = water2_ktest_aged[:, 19] cum_roff_30mmh = water2_ktest_aged[:, 20] runoff_data2_aged = stackdata4( cum_time_30min, cum_roff_135mmh, cum_roff_55mmhA, cum_roff_55mmhB, cum_roff_30mmh) time_size_135mmh = water2_ktest_aged[:, 25] time_size_55mmhA = water2_ktest_aged[:, 26] time_size_55mmhB = water2_ktest_aged[:, 27] time_size_30mmh = water2_ktest_aged[:, 28] time_sizes2 = [time_size_135mmh, time_size_135mmh, time_size_55mmhA, time_size_55mmhA, time_size_55mmhB, time_size_55mmhB, time_size_30mmh, time_size_30mmh] # - # ### Percolation & ponding - 2nd pulse (fresh soil) hydroplot2(percol_data2_fresh, "Leached, 135mm/h", "Leached, 55mm/h - A", "Leached, 55mm/h - B", "Leached, 30mm/h", leach_high_6min, leach_med_12min, leach_med_30min, leach_low_30min, "Leached Volume [mL] - Crop Soil 2nd Pulse", AGED = False) hydroplot2(runoff_data2_fresh, "Ponded, 135mm/h", "Ponded, 55mm/h - A", "Ponded, 55mm/h - B", "Ponded, 30mm/h", roff_high_6min, roff_med_12min, roff_med_30min, roff_low_30min, "Ponded Volume [mL] - Crop Soil 
2nd Pulse", AGED = False) # ### Percolation & ponding - 2nd pulse (aged soil) hydroplot2(percol_data2_aged, "Leached, 135mm/h - A", "Leached, 55mm/h - B", "Leached, 55mm/h - C", "Leached, 30mm/h - D", leach_high_6min, leach_med_12min, leach_med_30min, leach_low_30min, "Leached Volume [mL] - Crop Soil 2nd Pulse", AGED = True) hydroplot2(runoff_data2_aged, "Ponded, 135mm/h", "Ponded, 55mm/h - A", "Ponded, 55mm/h - B", "Ponded, 30mm/h", roff_high_6min, roff_med_12min, roff_med_30min, roff_low_30min, "Ponded Volume [mL] - Crop Soil 2nd Pulse", AGED = True) # ## Transport - 2nd pulse # ** Initial and observed mass in leachate and ponding ** # + # Dictionary contains: # Scenario: # (initial_mass, leached_mass_observed, ponded_mass_obs, # initial_mass_error, error_leach, error_pond) mxCr_dict_S_2nd = { 'a_high_0d': (1496.75, 8.35, 5.7, 763.2, 0.5, 0.3), 'b_high_1d': (1127.52, 37.57, 4.1, 642.9, 2.0, 0.2), 'c_med12_0d': (1440.72, 290.3, 'nan', 766.8, 'nan', 'nan'), 'd_med12_1d': (1267.11, 'nan', 9.2, 634.1, 'nan', 0.5), 'e_med30_0d': (1047.95, 93.3, 4.3, 791.4, 5.2, 0.2), 'f_med30_1d': (994.09, 82.2, 14.0, 651.3, 4.6, 0.8), 'g_low_0d': (1462.08, 285.3, 'nan', 765.4, 'nan', 'nan'), 'h_low_1d': (1050.48, 'nan', 12.4, 647.7, 'nan', 0.7) } mxCr_dict_L_2nd = { 'a_high_0d': (1222.86, 175.44, 4.7, 656.4, 9.8, 0.3), 'b_high_1d': (1006.54, 40.03, 3.2, 613.3, 1.7, 0.2), 'c_med12_0d': (1211.28, 272.5, 1.8, 657.1, 15.2, 0.1), 'd_med12_1d': (1025.43, 168.5, 'nan', 612.1, 9.4, 'nan'), 'e_med30_0d': (751.13, 35.1, 8.9, 686.0, 2.1, 0.5), 'f_med30_1d': (938.23, 146.1, 0.1, 617.5, 8.2, 0.008), 'g_low_0d': (979.82, 86.0, 5.8, 671.7, 4.8, 0.3), 'h_low_1d': (830.68, 76.5, 9.6, 624.3, 4.3, 0.5) } # - mxCr_dict_S_2nd['a_high_0d'][1] # #### Change in organic matter characterisitics # + # Fraction organic matter and carbon fom_crop_sterile = 3.85/100.0 fom_crop_untreat = 5.50/100.0 foc_crop_sterile = 0.58*fom_crop_sterile foc_crop_untreat = 0.58*fom_crop_untreat # Pesticide Koc # Koc_mexyl = [200, 182.0, 163.0, 100, 80.0, 50, 30] # [ml/g] # Koc_mexyl = [50, 30, 20, 10, 2] # Koc_mexyl = np.array(Koc_mexyl)*10**3 # [mm3/g] # Kd (a) - NPIC @ http://npic.orst.edu/ingred/ppdmove.htm Kd_mexylA_crop_sterile = Koc_mexyl[0]*foc_crop_sterile Kd_mexylA_crop_untreat = Koc_mexyl[0]*foc_crop_untreat # Kd (b) - PAN @ http://www.pesticideinfo.org/ Kd_mexylB_crop_sterile = Koc_mexyl[1]*foc_crop_sterile Kd_mexylB_crop_untreat = Koc_mexyl[1]*foc_crop_untreat # Kd (c) # https://toxnet.nlm.nih.gov/cgi-bin/sis/search/a?dbs+hsdb:@term+@DOCNO+7061 Kd_mexylC_crop_sterile = Koc_mexyl[2]*foc_crop_sterile Kd_mexylC_crop_untreat = Koc_mexyl[2]*foc_crop_untreat Kd_mexylD_crop_sterile = Koc_mexyl[3]*foc_crop_sterile Kd_mexylD_crop_untreat = Koc_mexyl[3]*foc_crop_untreat Kd_mexylE_crop_sterile = Koc_mexyl[4]*foc_crop_sterile Kd_mexylE_crop_untreat = Koc_mexyl[4]*foc_crop_untreat Kd_mexylF_crop_sterile = Koc_mexyl[5]*foc_crop_sterile Kd_mexylF_crop_untreat = Koc_mexyl[5]*foc_crop_untreat Kd_mexyl_sterile2 = [Kd_mexylA_crop_sterile, Kd_mexylB_crop_sterile, Kd_mexylC_crop_sterile, Kd_mexylD_crop_sterile, Kd_mexylE_crop_sterile, Kd_mexylF_crop_sterile] Kd_mexyl_living2 = [Kd_mexylA_crop_untreat, Kd_mexylB_crop_untreat, Kd_mexylC_crop_untreat, Kd_mexylD_crop_untreat, Kd_mexylE_crop_untreat, Kd_mexylF_crop_untreat] # - # ### Computation transport - 2nd Pulse # #### Kd optimal (sterile) # + mx_sterile_2nd = pest_test3( Kd_mexyl_sterile2, mxCr_dict_S_2nd, pb_crop_i2, pb_crop_f2, ovSat_crop, percol_data2_fresh, runoff_data2_fresh, percol_data2_aged, 
runoff_data2_aged, time_sizes2, area, soil_height2, d, runoff_vel, first_cycle = False, living = False) # - # #### Kd optimal (living) # + mx_living_2nd = pest_test3( Kd_mexyl_living2, mxCr_dict_L_2nd, pb_crop_i2, pb_crop_f2, ovSat_crop, percol_data2_fresh, runoff_data2_fresh, percol_data2_aged, runoff_data2_aged, time_sizes2, area, soil_height2, d, runoff_vel, first_cycle = False, living = True) # - # #### Sterile time series # + # Time axis cum_time_30min = mx_sterile_2nd[:, 0] # Cumulative leachate sterilized high_0d_cum_mass_out_dt = mx_sterile_2nd[:, 1] high_1d_cum_mass_out_dt = mx_sterile_2nd[:, 2] med12_0d_cum_mass_out_dt = mx_sterile_2nd[:, 3] med12_1d_cum_mass_out_dt = mx_sterile_2nd[:, 4] med30_0d_cum_mass_out_dt = mx_sterile_2nd[:, 5] med30_1d_cum_mass_out_dt = mx_sterile_2nd[:, 6] low_0d_cum_mass_out_dt = mx_sterile_2nd[:, 7] low_1d_cum_mass_out_dt = mx_sterile_2nd[:, 8] # Ponded mass high_0d_overmass_dt = mx_sterile_2nd[:, 9] high_1d_overmass_dt = mx_sterile_2nd[:, 10] med12_0d_overmass_dt = mx_sterile_2nd[:, 11] med12_1d_overmass_dt = mx_sterile_2nd[:, 12] med30_0d_overmass_dt = mx_sterile_2nd[:, 13] med30_1d_overmass_dt = mx_sterile_2nd[:, 14] low_0d_overmass_dt = mx_sterile_2nd[:, 15] low_1d_overmass_dt = mx_sterile_2nd[:, 16] mass_percol_sterile2 = stackdata8( cum_time_30min, high_0d_cum_mass_out_dt, high_1d_cum_mass_out_dt, med12_0d_cum_mass_out_dt, med12_1d_cum_mass_out_dt, med30_0d_cum_mass_out_dt, med30_1d_cum_mass_out_dt, low_0d_cum_mass_out_dt, low_1d_cum_mass_out_dt) mass_pond_sterile2 = stackdata8( cum_time_30min, high_0d_overmass_dt, high_1d_overmass_dt, med12_0d_overmass_dt, med12_1d_overmass_dt, med30_0d_overmass_dt, med30_1d_overmass_dt, low_0d_overmass_dt, low_1d_overmass_dt) # - # #### Living time series # + # Time axis cum_time_30min = mx_living_2nd[:, 0] # Cumulative leachate sterilized high_0d_cum_mass_out_dt = mx_living_2nd[:, 1] high_1d_cum_mass_out_dt = mx_living_2nd[:, 2] med12_0d_cum_mass_out_dt = mx_living_2nd[:, 3] med12_1d_cum_mass_out_dt = mx_living_2nd[:, 4] med30_0d_cum_mass_out_dt = mx_living_2nd[:, 5] med30_1d_cum_mass_out_dt = mx_living_2nd[:, 6] low_0d_cum_mass_out_dt = mx_living_2nd[:, 7] low_1d_cum_mass_out_dt = mx_living_2nd[:, 8] # Ponded mass high_0d_overmass_dt = mx_living_2nd[:, 9] high_1d_overmass_dt = mx_living_2nd[:, 10] med12_0d_overmass_dt = mx_living_2nd[:, 11] med12_1d_overmass_dt = mx_living_2nd[:, 12] med30_0d_overmass_dt = mx_living_2nd[:, 13] med30_1d_overmass_dt = mx_living_2nd[:, 14] low_0d_overmass_dt = mx_living_2nd[:, 15] low_1d_overmass_dt = mx_living_2nd[:, 16] mass_percol_living2 = stackdata8( cum_time_30min, high_0d_cum_mass_out_dt, high_1d_cum_mass_out_dt, med12_0d_cum_mass_out_dt, med12_1d_cum_mass_out_dt, med30_0d_cum_mass_out_dt, med30_1d_cum_mass_out_dt, low_0d_cum_mass_out_dt, low_1d_cum_mass_out_dt) mass_pond_living2 = stackdata8( cum_time_30min, high_0d_overmass_dt, high_1d_overmass_dt, med12_0d_overmass_dt, med12_1d_overmass_dt, med30_0d_overmass_dt, med30_1d_overmass_dt, low_0d_overmass_dt, low_1d_overmass_dt) # - # ## Plotting transport - Metalaxyl # ### Sterile (2nd Pulse, Crop Soil) pestiplot_condition( mass_percol_sterile2, mxCr_dict_S_2nd, 'Metalaxyl', soil_type='Crop Soil', cycle = '2nd pulse', LEACH = True, STERILE = True ) pestiplot_condition( mass_pond_sterile2, mxCr_dict_S_2nd, 'Metalaxyl', soil_type='Crop Soil', cycle = '2nd pulse', LEACH = False, STERILE = True ) # ### Living (2nd Pulse, Crop Soil) pestiplot_condition( mass_percol_living2, mxCr_dict_L_2nd, 'Metalaxyl', soil_type='Crop Soil', 
cycle = '2nd pulse', LEACH = True, STERILE = False ) pestiplot_condition( mass_pond_living2, mxCr_dict_L_2nd, 'Metalaxyl', soil_type='Crop Soil', cycle = '2nd pulse', LEACH = False, STERILE = False ) # END NOTEBOOK
25,777
/Without_Rotation-fastai_models_Benchmark_1.ipynb
00b880362e37eb7769f307618a2f47569a0093c2
[]
no_license
gyc599050377/Symmetry-Identification
https://github.com/gyc599050377/Symmetry-Identification
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
477,395
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python (gyc) # language: python # name: gyc # --- # + from fastai.vision import * root = Path('/data/WP_dataset/') data = ImageDataBunch.from_folder(root) data.show_batch(rows=5, figsize=(7,6)) # - print(data.classes) len(data.classes) learner_resnet50 = cnn_learner(data, models.resnet50, metrics=error_rate) learner_resnet50 learner_resnet50.model = torch.nn.DataParallel(learner_resnet50.model) # Trains the freezed model for 1 epochs learner_resnet50.fit_one_cycle(1) learner_resnet50.save('resnet50-1') # Try with unfreeze and train for 1 epoch: # + learner_resnet50.unfreeze() learner_resnet50.fit_one_cycle(1) # - learner_resnet50.save('resnet50-2') # If it getting worse, reload last saved learner and try to find a better lr. # # Or we can just compare the result between default lr and mannually changed lr # + learner_resnet50.load('resnet50-1') learner_resnet50.lr_find() learner_resnet50.recorder.plot() # + learner_resnet50.unfreeze() learner_resnet50.fit_one_cycle(1, max_lr=slice(7e-7,1e-4)) # - learner_resnet50.save('resnet50-3') pwd # + language="javascript" # IPython.notebook.save_notebook() # - learner_resnet50.fit_one_cycle(4, max_lr=slice(1e-6,1e-4)) # # vgg19_bn learner_vgg19_bn = cnn_learner(data, models.vgg19_bn, metrics=error_rate) learner_vgg19_bn learner_vgg19_bn.model = torch.nn.DataParallel(learner_vgg19_bn.model) # + # Trains the freezed model for 1 epochs learner_vgg19_bn.fit_one_cycle(1) # - learner_vgg19_bn.save('vgg19_bn-1') learn_vgg19_bn = cnn_learner(data, models.vgg19_bn, metrics=error_rate) learn # # Validate with pre-trained vgg19 learn.fit_one_cycle(1) learn.save('vgg-1') learn.load('vgg-1') # # Start training learn.unfreeze() learn.fit_one_cycle(4) learn.save('vgg-stage-2') learn.load('vgg-stage-2') # Fifth epoch learn.fit_one_cycle(1) learn.save('vgg-stage-3') interp = ClassificationInterpretation.from_learner(learn, ds_type=DatasetType.Train) interp.plot_confusion_matrix(figsize=(15,15), dpi=120) # # keep training learn.load('vgg-stage-3') # Train 5 epochs (from 5th epoch) learn.fit_one_cycle(5) learn.save('vgg-10epochs') interp = ClassificationInterpretation.from_learner(learn, ds_type=DatasetType.Train) interp.plot_confusion_matrix(figsize=(13,13), dpi=120) # # Validate with certain image sample # + from fastai.vision import * img = Image(pil2tensor(img_cv2, dtype=np.float32).div_(255)) aa = load_learner('/home/yig319/Dropbox/Symmetry-Identification') plt.imshow(img_cv2) # -
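# A minimal sketch of single-image inference with the loaded learner. This assumes an
# `export.pkl` was produced by `learn.export()` in the directory passed to `load_learner`
# above, and the image path below is a hypothetical placeholder.

# +
from fastai.vision import open_image

test_img = open_image('/data/WP_dataset/some_class/example.png')  # hypothetical sample image
pred_class, pred_idx, probs = aa.predict(test_img)
print(pred_class, probs[pred_idx].item())
# -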
2,735
/Mito-Microscopy.ipynb
327f32136b4c77c85b085b879b91d32d27ca8aba
[]
no_license
gsolson/Mito-Microscopy
https://github.com/gsolson/Mito-Microscopy
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
2,759
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python [conda env:Anaconda3] # language: python # name: conda-env-Anaconda3-py # --- # + import pandas as pd import seaborn as sns import numpy as np import linearmodels as lm import matplotlib import math import statsmodels.formula.api as smf import statsmodels.api as sm from linearmodels.panel import PanelOLS from linearmodels.panel import RandomEffects from linearmodels.panel import FirstDifferenceOLS from linearmodels.panel import compare from matplotlib import pyplot as plt from pandas.api.types import is_numeric_dtype pd.set_option('display.max_columns', 500) # %matplotlib inline # - # read the main data set df=pd.read_csv('estimation_file.csv', encoding='utf-8') df.columns = df.columns.str.strip() # exclude if ridership is zero--missing data df = df[df['UPT_ADJ']>0] df['BUS_FLAG'] = np.where(df['Mode']=='Bus', 1, 0) df['RAIL_FLAG'] = np.where(df['Mode']=='Rail', 1, 0) # + # keep only 2014 and 2018, and set the index df = df[(df['Year']==2014) | (df['Year']==2018)] # set the indices df['ID'] = df['MNAME'] + '-' + df['Mode'] df=df.set_index(['ID','Year']) # - # do not transform categorical fields category_fields = ['BUS_FLAG', 'RAIL_FLAG', 'CLUSTER_GT', 'CLUSTER_APTA', 'CLUSTER_APTA_HML'] # + # start by dropping fields already present drop_fields = [] for col in df.columns: if '_diff' in col: drop_fields.append(col) if '_log' in col: drop_fields.append(col) if '_scale' in col: drop_fields.append(col) df = df.drop(columns=drop_fields) # - # keep only the numeric columns -- the estimation will give an error otherwise df = df.select_dtypes(include=[np.number]) # calculate the first difference and drop the boundary cases where the CBSA number changes df = df.sort_index() for col in df.columns: if not col in category_fields: df[col] = df[col].diff() # keep only 2018 df = df.reset_index() df = df[df['Year']==2018] df.to_csv('change_2014_2018.csv') nyc-tlc/trip+data/fhv_tripdata_2015-07.csv") os.system("curl -O https://s3.amazonaws.com/nyc-tlc/trip+data/fhv_tripdata_2015-08.csv") os.system("curl -O https://s3.amazonaws.com/nyc-tlc/trip+data/fhv_tripdata_2015-09.csv") os.system("curl -O https://s3.amazonaws.com/nyc-tlc/trip+data/fhv_tripdata_2015-10.csv") os.system("curl -O https://s3.amazonaws.com/nyc-tlc/trip+data/fhv_tripdata_2015-11.csv") os.system("curl -O https://s3.amazonaws.com/nyc-tlc/trip+data/fhv_tripdata_2015-12.csv") os.system("curl -O https://s3.amazonaws.com/nyc-tlc/trip+data/fhv_tripdata_2016-01.csv") os.system("curl -O https://s3.amazonaws.com/nyc-tlc/trip+data/fhv_tripdata_2016-02.csv") os.system("curl -O https://s3.amazonaws.com/nyc-tlc/trip+data/fhv_tripdata_2016-03.csv") os.system("curl -O https://s3.amazonaws.com/nyc-tlc/trip+data/fhv_tripdata_2016-04.csv") os.system("curl -O https://s3.amazonaws.com/nyc-tlc/trip+data/fhv_tripdata_2016-05.csv") os.system("curl -O https://s3.amazonaws.com/nyc-tlc/trip+data/fhv_tripdata_2016-06.csv") os.system("mv fhv_tripdata_2015-07.csv " + os.getenv("PUIDATA")) os.system("mv fhv_tripdata_2015-08.csv " + os.getenv("PUIDATA")) os.system("mv fhv_tripdata_2015-09.csv " + os.getenv("PUIDATA")) os.system("mv fhv_tripdata_2015-10.csv " + os.getenv("PUIDATA")) os.system("mv fhv_tripdata_2015-11.csv " + os.getenv("PUIDATA")) os.system("mv fhv_tripdata_2015-12.csv " + os.getenv("PUIDATA")) os.system("mv fhv_tripdata_2016-01.csv " + os.getenv("PUIDATA")) os.system("mv 
fhv_tripdata_2016-02.csv " + os.getenv("PUIDATA")) os.system("mv fhv_tripdata_2016-03.csv " + os.getenv("PUIDATA")) os.system("mv fhv_tripdata_2016-04.csv " + os.getenv("PUIDATA")) os.system("mv fhv_tripdata_2016-05.csv " + os.getenv("PUIDATA")) os.system("mv fhv_tripdata_2016-06.csv " + os.getenv("PUIDATA")) # bacuase of these datasets' size, I put all downloaded files into one project folder easy for later deletion. # concate files path = r'Data/FHV' all_files = glob.glob(os.path.join(path, "*.csv")) df_from_each_file = (pd.read_csv(f) for f in all_files) concatenated_df = pd.concat(df_from_each_file, ignore_index=True) len(concatenated_df) concatenated_df.shape concatenated_df.dropna(axis=0, inplace=True) concatenated_df.shape concatenated_df.reset_index(inplace=True) FHV.drop([u'index'], axis = 1, inplace = True) concatenated_df.head() concatenated_df.to_csv('FHVcleaned.csv') # Project taxi zone to tract NYCcensus = gpd.GeoDataFrame.from_file('nyc tracts/nyu_2451_34513.shp') NYCcensus.head() NYCcensus.crs NYCcensus.plot(facecolor='w',edgecolor='k') taxizone = gpd.GeoDataFrame.from_file('taxi zone/taxi_zones.shp') taxizone.head() taxizone.crs taxizone.plot(figsize(10,10),facecolor='w',edgecolor='k') NYCcensus.crs = taxizone.crs zonecombined = gpd.sjoin(NYCcensus,taxizone,how="left",op="intersects") zonecombined.fillna(0, inplace = True) zonecombined zonecombined = zonecombined[zonecombined.index_right>0] zonecombined # + def get_size_of_intersection(base, taxi): return base['geometry'].intersection(taxi['geometry'].iloc[int(base['index_right'])]).area # - zonecombined['intersection_size'] = zonecombined.apply(lambda row : get_size_of_intersection(row, taxizone), axis=1) zonecombined.drop([u'statefp', u'countyfp', u'name', u'namelsad', u'mtfcc', u'funcstat', u'aland', u'awater', u'OBJECTID'], axis = 1, inplace = True) taxizone['area'] = taxizone.geometry.area taxizone zonecombined.reset_index(inplace=True) zonecombined['taxizone_area']=0 for k in range(len(zonecombined)): zonecombined['taxizone_area'][k] = taxizone['area'][int(zonecombined['LocationID'][k]-1)] zonecombined.rename(columns={"index": "zone_percentage"},inplace=True) zonecombined['zone_percentage']=zonecombined['intersection_size']/zonecombined['taxizone_area'] zonecombined.to_csv('combinezonefinal.csv', sep=',', index=True) # get each tracts trip quantities FHV = pd.read_csv('FHVcleaned.csv') FHV.head() FHV['Pickup_date'] = pd.to_datetime(FHV['Pickup_date']) FHV.drop([u'Unnamed: 0'], axis = 1, inplace = True) FHV['month'] = FHV.Pickup_date.dt.month import calendar FHV['month'] = FHV['month'].apply(lambda x: calendar.month_abbr[x]) FHV_quan = FHV.groupby(['locationID','month']).count() FHV_quan.reset_index(inplace=True) FHV_quan.drop([u'Pickup_date'], axis = 1, inplace = True) FHV_quan.drop([u'index'], axis = 1, inplace = True) FHV_quan.head() FHV_quan.rename(columns={'locationID':'LocationID'}, inplace=True) type(FHV_quan.LocationID[6]) type(zonecombined.LocationID[6]) intractsmerge = pd.merge(zonecombined, FHV_quan, how='inner', on='LocationID') intractsmerge.rename(columns={'Unnamed: 0':'pickups_intract'}, inplace=True) intractsmerge['Jan'] = intractsmerge.zone_percentage*intractsmerge.Jan intractsmerge['Feb'] = intractsmerge.zone_percentage*intractsmerge.Feb intractsmerge['Mar'] = intractsmerge.zone_percentage*intractsmerge.Mar intractsmerge['Apr'] = intractsmerge.zone_percentage*intractsmerge.Apr intractsmerge['May'] = intractsmerge.zone_percentage*intractsmerge.May intractsmerge['Jun'] = 
intractsmerge.zone_percentage*intractsmerge.Jun intractsmerge['Jul'] = intractsmerge.zone_percentage*intractsmerge.Jul intractsmerge['Aug'] = intractsmerge.zone_percentage*intractsmerge.Aug intractsmerge['Sep'] = intractsmerge.zone_percentage*intractsmerge.Sep intractsmerge['Oct'] = intractsmerge.zone_percentage*intractsmerge.Oct intractsmerge['Nov'] = intractsmerge.zone_percentage*intractsmerge.Nov intractsmerge['Dec'] = intractsmerge.zone_percentage*intractsmerge.Dec tractpickup = intractsmerge.groupby(['geoid']).sum() tractpickup.reset_index(inplace=True) tractpickup.head() tractpickup.drop([ u'zone_percentage', u'tractce',u'intptlat', u'intptlon', u'index_right', u'Shape_Leng', u'Shape_Area', u'LocationID', u'intersection_size', u'taxizone_area', u'pickup'], axis = 1, inplace = True) tractpickup.to_csv('FHV.csv') # # merge result FHV = pd.read_csv('FHV.csv') FHV.head() FHV.drop([u'Unnamed: 0'], axis = 1, inplace = True) FHV.columns FHV.head() FHV['12MonthTotal']= FHV[FHV.columns[1]] for i in range(2,13): FHV['12MonthTotal']+=FHV[FHV.columns[i]] FHV.head() FHV['countycode'] = FHV.GEOID.astype('str').apply(lambda x: x[2:5]) FHV.head() FHV.countycode.unique() # # each month total in all boroughs FHV_all = FHV.drop([u'GEOID', u'12MonthTotal', u'countycode'], axis = 1) FHV_all = FHV_all.T FHV_all.reset_index(inplace = True) FHV_all.rename(columns={'index':'month'}, inplace=True) FHV_all.head() type(FHV_all.month[6]) FHV_all.month = pd.to_datetime(FHV_all.month) type(FHV_all.month[6]) FHV_all['AllArea'] = FHV_all.sum(axis=1) FHV_all.head() # # each month total in non-manhattan boroughs FHV_nm = FHV[FHV.countycode.astype('int')!=61].drop([u'GEOID', u'12MonthTotal', u'countycode'], axis = 1) FHV_nm = FHV_nm.T FHV_nm.reset_index(inplace = True) FHV_nm.rename(columns={'index':'month'}, inplace=True) FHV_nm.head() FHV_nm.month = pd.to_datetime(FHV_nm.month) FHV_nm['NonManhattan'] = FHV_nm.sum(axis=1) FHV_nm.head() # # each month total in manhattan FHV_m = FHV[FHV.countycode.astype('int')==61].drop([u'GEOID', u'12MonthTotal', u'countycode'], axis = 1) FHV_m = FHV_m.T FHV_m.reset_index(inplace = True) FHV_m.rename(columns={'index':'month'}, inplace=True) FHV_m.head() FHV_m.month = pd.to_datetime(FHV_m.month) FHV_m['Manhattan'] = FHV_m.sum(axis=1) FHV_m.head() # # Plot all/in manhattan/in non-manhattan Together fig = pl.figure(figsize=(8,6)) pl.plot(FHV_all.month, FHV_all.AllArea) pl.plot(FHV_nm.month, FHV_nm.NonManhattan) pl.plot(FHV_m.month, FHV_m.Manhattan) pl.title('New York City For Hire Vehicle Monthly Usage', fontsize = 18) pl.xlabel('Time Line', fontsize = 13) pl.ylabel('Number of Rides', fontsize = 13) pl.legend(loc='best', fontsize = 12) def PlotTheDistribution (counts1, counts2, norm1, norm2, error1, error2, ylabel): fig = pl.figure(figsize(15,15)) ((counts1) / norm1).plot(kind="bar", color='IndianRed', yerr=[ ((error1) / norm1, (error1) / norm1)], alpha=0.5, label='NonManhattan PickUp') ax = ((counts2) / norm2).plot(kind="bar", color='SteelBlue', yerr=[ ((error2) / norm2, (error2) / norm2)], alpha=0.5, label='Manhattan PickUp') tmp = ax.xaxis.set_ticklabels(FHV_m.month.dt.strftime("%B %Y"), fontsize=20) ax.set_ylabel (ylabel,fontsize=16) ax.set_xlabel ("12 monthes",fontsize=16) ax.set_title ("For Hire Vehicle Rides In Manhattan And NonManhattan Area",fontsize=20) pl.legend(fontsize=20) error_m = np.sqrt(FHV_m.Manhattan) error_nm = np.sqrt(FHV_nm.NonManhattan) #Unnormalized with error info plot norm_nm = 1 norm_m = 1 PlotTheDistribution(FHV_nm.NonManhattan, FHV_m.Manhattan, norm_nm, 
norm_m, error_nm, error_m,'Number of Rides') #Normalized with error info plot norm_nm = FHV_nm.NonManhattan.sum() norm_m = FHV_m.Manhattan.sum() PlotTheDistribution(FHV_nm.NonManhattan, FHV_m.Manhattan, norm_nm, norm_m, error_nm, error_m,'Fraction of Rides') #Fraction of Pickups def PlotTheFraction(fraction1, fraction2, error1, error2, timeline): fig = plt.figure(figsize(8,8)) plt.errorbar([0.4], [fraction1], yerr=[error1], fmt='o', label='NonManhattan PickUp') plt.errorbar([0.2], [fraction2], yerr=[error2], fmt='o', label='Manhattan PickUp') plt.title(timeline, fontsize=20) plt.xlim(0, 0.5) plt.xticks([]) plt.ylabel("Fraction of normalized rides in or not in Manhattan Area",fontsize=16) plt.legend(fontsize=20) plt.show() # + #Fraction of that non-manhattan rides during not in 2015 #and the fraction that rides in 2016, and the same for the rides in manhattan nm2015 = sum(FHV_nm.NonManhattan[:6]) * 1.0 / norm_nm nm2016 = sum(FHV_nm.NonManhattan[6:])*1.0 / norm_nm nm2015_error = np.sqrt(sum(error_nm[:5]**2)) / norm_nm nm2016_error = np.sqrt(sum(error_nm[6:]**2)) / norm_nm m2015 = sum(FHV_m.Manhattan[:6]) * 1.0 / norm_m m2016 = sum(FHV_m.Manhattan[6:])*1.0 / norm_m m2015_error = np.sqrt(sum(error_m[:5]**2)) / norm_m m2016_error = np.sqrt(sum(error_m[6:]**2)) / norm_m print("NonManhattan: 2015 rides:{0:.4f}, 2016 rides:{1:.4f}, 2015 rides error:{2:.4f}, 2016 rides error:{3:.4f}"\ .format(nm2015, nm2016, nm2015_error, nm2016_error)) print("Manhattan: 2015 rides :{0:.4f}, 2016 rides:{1:.4f}, 2015 rides error:{2:.4f}, 2016 rides error:{3:.4f}"\ .format(m2015, m2016, m2015_error, m2016_error)) # - #Plot the fraction of Riders in 2015 with error PlotTheFraction(nm2015, m2015, nm2015_error, m2015_error, '2015 rides') #Plot the fraction of Riders in 2016 with error PlotTheFraction(nm2016, m2016, nm2016_error, m2016_error, '2016 rides') P0mP1 = nm2016 - m2016 print ("difference between fraction of 2016 rides not in manhattan and rides in manhattan: ", P0mP1) if P0mP1 <= 0: print("In accordance with Null Hypothesis") else: print ("We must check the significance before we reject the Null Hypothesis") # #Chi Square Contigency Table m2016 # | Rides | 2016 | not in 2016 | | # |---------------------------|:---------------------:|--------------------|----------------------| # | Manhattan Rides | $0.556*42876429$ | $0.444*42876429$ | 42876429 | # | Non-Manhattan Rides | $0.618*25731470$ | $0.382*25731470$ | 25731470 | # | | | | | # | total | 39741342 | 28866556 | 68607899 | # + observations = [[0.556*42876429, 0.444*42876429], [0.618*25731470, 0.382*25731470]] N = 68607899 ob1 = observations[0][0]+observations[1][0] ob2 = observations[0][1]+observations[1][1] expectation = 42876429 * 25731470 * ob1 * ob2 chisqstat= lambda N, observations, expectation : N * ((observations[0][0] * observations[1][1] -observations[0][1] * observations[1][0])**2) / expectation print (chisqstat(N, observations, expectation)) # - # # Lower Income Areas FHV.head() acs = pd.read_csv('acs_nyc_2016.csv') acs.head() acsmerge = pd.merge(FHV, acs, how='inner', on='GEOID') acsmerge.head() acsmerge.shape acsmerge.drop([u'Unnamed: 0'], axis = 1, inplace = True) acsmergesort = acsmerge.copy() acsmergesort = acsmergesort[acsmergesort.Median_Household_Income_16!=0] acsmergesort = acsmergesort.sort_values('Median_Household_Income_16') acsmergesort.head() acsmergesort.shape lowerincome = acsmergesort[:1000] lowerincome.reset_index(inplace = True) lowerincome.drop([u'index'], axis = 1, inplace = True) lowerincome.head() higherincome = 
acsmergesort[1000:]
higherincome.reset_index(inplace=True)
higherincome.drop([u'index'], axis=1, inplace=True)
higherincome.head()

# # merge with NYC map

lowerincomespot = lowerincome[['GEOID', 'Median_Household_Income_16']]
lowerincomespot.head()

NYC = gpd.GeoDataFrame.from_file('nyc tracts/nyu_2451_34513.shp')
NYC.crs
NYC.head()

NYC.rename(columns={'geoid': 'GEOID'}, inplace=True)
type(lowerincomespot.GEOID[6])
type(NYC.GEOID[6])
NYC.GEOID = NYC.GEOID.astype(int)

NYCmergelower = gpd.GeoDataFrame.merge(NYC, lowerincomespot, how='outer', on='GEOID')
NYCmergelower.head()
NYCmergelower.columns

NYCmergelower.Median_Household_Income_16 = NYCmergelower.Median_Household_Income_16.fillna(0)
NYCmergelower.head()

fig, ax = plt.subplots(figsize=(15, 15))
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
ax.set_title('1000 lower income areas, the lower half of all tracts', fontsize=20)
NYCmergelower.plot(facecolor='w', edgecolor='k', column='Median_Household_Income_16', cmap='OrRd',
                   scheme='quantiles', ax=ax, legend=True)

# # each month total in lower income tracts

income_l = lowerincome.drop([u'GEOID', u'12MonthTotal', u'countycode', u'Median_Household_Income_16'], axis=1)
income_l = income_l.T
income_l.reset_index(inplace=True)
income_l.rename(columns={'index': 'month'}, inplace=True)
income_l.head()

income_l.month = pd.to_datetime(income_l.month)
income_l['lowincome'] = income_l.sum(axis=1)
income_l.head()

# # each month total in higher income tracts

income_h = higherincome.drop([u'GEOID', u'12MonthTotal', u'countycode', u'Median_Household_Income_16'], axis=1)
income_h = income_h.T
income_h.reset_index(inplace=True)
income_h.rename(columns={'index': 'month'}, inplace=True)
income_h.head()

# +
income_h.month = pd.to_datetime(income_h.month)
income_h['highincome'] = income_h.sum(axis=1)
income_h.head()
# -

# # Plot all/in lower/in higher Together

fig = pl.figure(figsize=(8, 6))
pl.plot(FHV_all.month, FHV_all.AllArea, label='AllArea')
pl.plot(income_l.month, income_l.lowincome, label='LowerIncome')
pl.plot(income_h.month, income_h.highincome, label='HigherIncome')
pl.title('New York City For Hire Vehicle Monthly Usage', fontsize=18)
pl.xlabel('Time Line', fontsize=13)
pl.ylabel('Number of Rides', fontsize=13)
pl.legend(loc='best', fontsize=12)


def PlotTheDistribution(counts1, counts2, norm1, norm2, error1, error2, ylabel):
    fig = pl.figure(figsize=(15, 15))
    ((counts1) / norm1).plot(kind="bar", color='IndianRed',
                             yerr=[((error1) / norm1, (error1) / norm1)],
                             alpha=0.5, label='lowincome PickUp')
    ax = ((counts2) / norm2).plot(kind="bar", color='SteelBlue',
                                  yerr=[((error2) / norm2, (error2) / norm2)],
                                  alpha=0.5, label='highincome PickUp')
    tmp = ax.xaxis.set_ticklabels(FHV_m.month.dt.strftime("%B %Y"), fontsize=20)
    ax.set_ylabel(ylabel, fontsize=16)
    ax.set_xlabel("12 months", fontsize=16)
    ax.set_title("For Hire Vehicle Rides In lowincome And highincome Area", fontsize=20)
    pl.legend(fontsize=20)


error_l = np.sqrt(income_l.lowincome)
error_h = np.sqrt(income_h.highincome)

# Unnormalized plot with error info
norm_l = 1
norm_h = 1
PlotTheDistribution(income_l.lowincome, income_h.highincome, norm_l,
                    norm_h, error_l, error_h, 'Number of Rides')

# Normalized plot with error info
norm_l = income_l.lowincome.sum()
norm_h = income_h.highincome.sum()
PlotTheDistribution(income_l.lowincome, income_h.highincome, norm_l,
                    norm_h, error_l, error_h, 'Fraction of Rides')


# Fraction of pickups
def PlotTheFraction(fraction1, fraction2, error1, error2, timeline):
    fig = plt.figure(figsize=(8, 8))
    plt.errorbar([0.4], [fraction1], yerr=[error1], fmt='o', label='lowincome PickUp')
    plt.errorbar([0.2], [fraction2], yerr=[error2], fmt='o', label='highincome PickUp')
    plt.title(timeline, fontsize=20)
    plt.xlim(0, 0.5)
    plt.xticks([])
    plt.ylabel("Fraction of normalized rides in low income and high income Area", fontsize=16)
    plt.legend(fontsize=20)
    plt.show()


# +
# Fraction of low-income-area rides that occurred in 2015 vs. in 2016,
# and the same fractions for rides in higher income areas
l2015 = sum(income_l.lowincome[:6]) * 1.0 / norm_l
l2016 = sum(income_l.lowincome[6:]) * 1.0 / norm_l
l2015_error = np.sqrt(sum(error_l[:6]**2)) / norm_l
l2016_error = np.sqrt(sum(error_l[6:]**2)) / norm_l

h2015 = sum(income_h.highincome[:6]) * 1.0 / norm_h
h2016 = sum(income_h.highincome[6:]) * 1.0 / norm_h
h2015_error = np.sqrt(sum(error_h[:6]**2)) / norm_h
h2016_error = np.sqrt(sum(error_h[6:]**2)) / nor_h if False else np.sqrt(sum(error_h[6:]**2)) / norm_h

print("lowincome: 2015 rides:{0:.4f}, 2016 rides:{1:.4f}, 2015 rides error:{2:.4f}, 2016 rides error:{3:.4f}"\
      .format(l2015, l2016, l2015_error, l2016_error))
print("highincome: 2015 rides:{0:.4f}, 2016 rides:{1:.4f}, 2015 rides error:{2:.4f}, 2016 rides error:{3:.4f}"\
      .format(h2015, h2016, h2015_error, h2016_error))
# -

# Plot the fraction of rides in 2015 with error
PlotTheFraction(l2015, h2015, l2015_error, h2015_error, '2015 rides')

# Plot the fraction of rides in 2016 with error
PlotTheFraction(l2016, h2016, l2016_error, h2016_error, '2016 rides')

P0mP1 = l2016 - h2016
print("difference between fraction of 2016 rides in lower income area and rides in higher income area: ", P0mP1)
if P0mP1 <= 0:
    print("In accordance with Null Hypothesis")
else:
    print("We must check the significance before we reject the Null Hypothesis")

# # Chi Square Contingency Table

l2016, l2015, h2016, h2015, norm_l, norm_h

# | Rides                     | 2016                  | 2015 (not in 2016) |                      |
# |---------------------------|:---------------------:|--------------------|----------------------|
# | lowincome area Rides      | $0.614*14897309$      | $0.386*14897309$   | 14897309             |
# | highincome area Rides     | $0.569*99897900$      | $0.431*99897900$   | 99897900             |
# |                           |                       |                    |                      |
# | total                     | 65988853              | 48806356           | 114795209            |

# +
observations = [[0.614*14897309, 0.386*14897309],
                [0.569*99897900, 0.431*99897900]]
N = 114795209
ob1 = observations[0][0] + observations[1][0]
ob2 = observations[0][1] + observations[1][1]
expectation = 14897309 * 99897900 * ob1 * ob2

chisqstat = lambda N, observations, expectation: N * ((observations[0][0] * observations[1][1]
                                                       - observations[0][1] * observations[1][0])**2) / expectation
print(chisqstat(N, observations, expectation))
# -

# # Plot the monthly total pickups for each census tract

NYCmerge = gpd.GeoDataFrame.merge(NYC, acsmerge, how='outer', on='GEOID')

fig, ax = plt.subplots(figsize=(15, 15))
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
ax.set_title('2015 Jul', fontsize=20)
NYCmerge.plot(facecolor='w', edgecolor='k', column='7/1/15', cmap='OrRd', scheme='quantiles', k=5, ax=ax, legend=True)

fig, ax = plt.subplots(figsize=(15, 15))
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
ax.set_title('2015 Aug', fontsize=20)
NYCmerge.plot(facecolor='w', edgecolor='k', column='8/1/15', cmap='OrRd', scheme='quantiles', k=5, ax=ax, legend=True)

fig, ax = plt.subplots(figsize=(15, 15))
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
ax.set_title('2015 Sep', fontsize=20)
NYCmerge.plot(facecolor='w', edgecolor='k', column='9/1/15', cmap='OrRd', scheme='quantiles', k=5, ax=ax,
legend=True) fig, ax = plt.subplots(figsize=(15,15)) ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) ax.set_title('2015 Oct', fontsize=20) NYCmerge.plot(facecolor='w',edgecolor='k',column='10/1/15',cmap='OrRd', scheme='quantiles',k=5, ax=ax, legend=True) fig, ax = plt.subplots(figsize=(15,15)) ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) ax.set_title('2015 Nov', fontsize=20) NYCmerge.plot(facecolor='w',edgecolor='k',column='11/1/15',cmap='OrRd', scheme='quantiles',k=5, ax=ax, legend=True) fig, ax = plt.subplots(figsize=(15,15)) ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) ax.set_title('2015 Dec', fontsize=20) NYCmerge.plot(facecolor='w',edgecolor='k',column='12/1/15',cmap='OrRd', scheme='quantiles',k=5, ax=ax, legend=True) fig, ax = plt.subplots(figsize=(15,15)) ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) ax.set_title('2016 Jan', fontsize=20) NYCmerge.plot(facecolor='w',edgecolor='k',column='1/1/16',cmap='OrRd', scheme='quantiles',k=5, ax=ax, legend=True) fig, ax = plt.subplots(figsize=(15,15)) ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) ax.set_title('2016 Feb', fontsize=20) NYCmerge.plot(facecolor='w',edgecolor='k',column='2/1/16',cmap='OrRd', scheme='quantiles',k=5, ax=ax, legend=True) fig, ax = plt.subplots(figsize=(15,15)) ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) ax.set_title('2016 Mar', fontsize=20) NYCmerge.plot(facecolor='w',edgecolor='k',column='3/1/16',cmap='OrRd', scheme='quantiles',k=5, ax=ax, legend=True) fig, ax = plt.subplots(figsize=(15,15)) ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) ax.set_title('2016 Apr', fontsize=20) NYCmerge.plot(facecolor='w',edgecolor='k',column='4/1/16',cmap='OrRd', scheme='quantiles',k=5, ax=ax, legend=True) fig, ax = plt.subplots(figsize=(15,15)) ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) ax.set_title('2016 May', fontsize=20) NYCmerge.plot(facecolor='w',edgecolor='k',column='5/1/16',cmap='OrRd', scheme='quantiles',k=5, ax=ax, legend=True) fig, ax = plt.subplots(figsize=(15,15)) ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) ax.set_title('2016 Jun', fontsize=20) NYCmerge.plot(facecolor='w',edgecolor='k',column='6/1/16',cmap='OrRd', scheme='quantiles',k=5, ax=ax, legend=True) fig, ax = plt.subplots(figsize=(15,15)) ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) ax.set_title('12-Month Total Pickups', fontsize=20) NYCmerge.plot(facecolor='w',edgecolor='k',column='12MonthTotal',cmap='OrRd', scheme='quantiles',k=5, ax=ax, legend=True) estamp": 1620408935855, "user_tz": -120, "elapsed": 206385, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} outputId="f29dc35d-d013-4a95-d7e7-c5b3919cd3f9" input_ids_cam, attention_masks_cam, labels_cam = prep_input (sentences, labels, max_len, tokenizer_cam) # + colab={"base_uri": "https://localhost:8080/"} id="o-4ArTutx-Cx" executionInfo={"status": "ok", "timestamp": 1620409125282, "user_tz": -120, "elapsed": 395801, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} outputId="8f6360f6-c980-4b49-c872-3fcf51474c2e" input_ids_flau, attention_masks_flau, labels_flau = prep_input(sentences,labels, max_len,tokenizer_flau) # + [markdown] 
id="7XrGAY_zRX--" # # 3.5. Training & Validation Split # Divide up our training randomly select **10%** as a validation set off of the training set. # # While splitting, we used the following parameters: # # # 1. **stratify**: # in this context, stratification means that the train_test_split method returns training and test subsets that have the same proportions of class labels as the input dataset. # 2. **random_state**: # simply sets a seed to the random generator, so that your train-test splits are always deterministic. If you don't set a seed, it is different each time. # + id="nZQ0mB1NyeWa" executionInfo={"status": "ok", "timestamp": 1620409125282, "user_tz": -120, "elapsed": 395790, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} # val_size = 0.1 # tr_inputs_cam, val_inputs_cam, _,_ = train_test_split (input_ids_cam, labels_cam, stratify = labels_cam, # random_state=2020, test_size = val_size) # tr_masks_cam, val_masks_cam, _,_ = train_test_split (attention_masks_cam, labels, stratify = labels, # labels: Preprocess.labels # random_state=2020, test_size = val_size) # tr_inputs_flau, val_inputs_flau, _,_ = train_test_split (input_ids_flau, labels_flau, stratify=labels, # random_state=2020, test_size = val_size) # tr_masks_flau, val_masks_flau, _,_ = train_test_split (attention_masks_flau, labels,stratify=labels_flau, # labels: Preprocess.labels # random_state=2020, test_size = val_size) # + id="SfiE2yDIzuRO" executionInfo={"status": "ok", "timestamp": 1620409125283, "user_tz": -120, "elapsed": 395784, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} # torch.save(tr_inputs_cam, "tr_inputs_cam.pt") # torch.save(val_inputs_cam, "val_inputs_cam.pt") # torch.save(tr_masks_cam, "tr_masks_cam.pt") # torch.save(val_masks_cam, "val_masks_cam.pt") # torch.save(tr_inputs_flau, "tr_inputs_flau.pt") # torch.save(val_inputs_flau, "val_inputs_flau.pt") # torch.save(tr_masks_flau, "tr_masks_flau.pt") # torch.save(val_masks_flau, "val_masks_flau.pt") # + id="mdkITHXh0Ahs" executionInfo={"status": "ok", "timestamp": 1620409125283, "user_tz": -120, "elapsed": 395776, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} # text_input='./' # tr_inputs_cam = torch.load(text_input + "tr_inputs_cam.pt") # val_inputs_cam = torch.load(text_input +"val_inputs_cam.pt") # tr_masks_cam = torch.load(text_input + "tr_masks_cam.pt") # val_masks_cam = torch.load(text_input + "val_masks_cam.pt") # tr_inputs_flau = torch.load(text_input + "tr_inputs_flau.pt") # val_inputs_flau = torch.load(text_input + "val_inputs_flau.pt") # tr_masks_flau = torch.load(text_input + "tr_masks_flau.pt") # val_masks_flau = torch.load(text_input + "val_masks_flau.pt") # + id="4HFg5HwahAij" executionInfo={"status": "ok", "timestamp": 1620409125284, "user_tz": -120, "elapsed": 395770, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} tr_inputs, test_inputs_cam, tr_labels, test_labels_cam = train_test_split(input_ids_cam, labels_cam, stratify=labels_cam, random_state=2020, test_size = 0.2) tr_inputs_cam, val_inputs_cam, train_labels, 
val_labels = train_test_split(tr_inputs, tr_labels, stratify=tr_labels, random_state=2020, test_size = 0.15) tr_masks, test_masks_cam, tr_masks_labels, _ = train_test_split(attention_masks_cam, labels, stratify=labels, random_state=2020, test_size=0.2) tr_masks_cam, val_masks_cam, u,v = train_test_split(tr_masks, tr_masks_labels, stratify=tr_masks_labels, random_state=2020, test_size=0.15 ) # + id="dGL3n3GUhhGI" executionInfo={"status": "ok", "timestamp": 1620409126548, "user_tz": -120, "elapsed": 397026, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} tr_inputs, test_inputs_flau, tr_labels, test_labels_flau = train_test_split(input_ids_cam, labels_cam, stratify=labels_cam, random_state=2020, test_size = 0.2) tr_inputs_flau, val_inputs_flau, train_labels, val_labels = train_test_split(tr_inputs, tr_labels, stratify=tr_labels, random_state=2020, test_size = 0.15) tr_masks, test_masks_flau, tr_masks_labels, _ = train_test_split(attention_masks_cam, labels, stratify=labels, random_state=2020, test_size=0.2) tr_masks_flau, val_masks_flau, u,v = train_test_split(tr_masks, tr_masks_labels, stratify=tr_masks_labels, random_state=2020, test_size=0.15 ) # + id="y__2u2hzhObl" executionInfo={"status": "ok", "timestamp": 1620409126744, "user_tz": -120, "elapsed": 397215, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} torch.save(tr_inputs_cam, "tr_inputs_cam.pt") torch.save(val_inputs_cam, "val_inputs_cam.pt") torch.save(tr_masks_cam, "tr_masks_cam.pt") torch.save(val_masks_cam, "val_masks_cam.pt") torch.save(test_inputs_cam, "test_inputs_cam.pt") torch.save(test_masks_cam, "test_masks_cam.pt") torch.save(tr_inputs_flau, "tr_inputs_flau.pt") torch.save(val_inputs_flau, "val_inputs_flau.pt") torch.save(tr_masks_flau, "tr_masks_flau.pt") torch.save(val_masks_flau, "val_masks_flau.pt") torch.save(test_inputs_flau, "test_inputs_flau.pt") torch.save(test_masks_flau, "test_masks_flau.pt") # + id="tB_LU5TRhUov" executionInfo={"status": "ok", "timestamp": 1620409126744, "user_tz": -120, "elapsed": 397208, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} text_input='./' tr_inputs_cam = torch.load(text_input + "tr_inputs_cam.pt") val_inputs_cam = torch.load(text_input +"val_inputs_cam.pt") tr_masks_cam = torch.load(text_input + "tr_masks_cam.pt") val_masks_cam = torch.load(text_input + "val_masks_cam.pt") input_ids_test_cam = torch.load(text_input + "test_inputs_cam.pt") attention_masks_test_cam = torch.load(text_input + "test_masks_cam.pt") tr_inputs_flau = torch.load(text_input + "tr_inputs_flau.pt") val_inputs_flau = torch.load(text_input + "val_inputs_flau.pt") tr_masks_flau = torch.load(text_input + "tr_masks_flau.pt") val_masks_flau = torch.load(text_input + "val_masks_flau.pt") input_ids_test_flau = torch.load(text_input + "test_inputs_flau.pt") attention_masks_test_flau = torch.load(text_input + "test_masks_flau.pt") # + [markdown] id="i1O4EyDzCpsS" # # 4. Defining Models to be Fused # # Now, as our data has been preprocessed, cleaned and text data was tokenzied , it is ready to be fed to the models. # - As a first step, first we need to define and configure the models. # # + [markdown] id="rlj0Dz1FW7cM" # # 4.1. 
RESNet Model for Image Processing. # # In PyTorch, you always need to define a forward method for your neural network model. But you never have to call it explicitly. # Here we are defining our image processing class is subclass of nn.Module and is inheriting all methods. In the super class, nn.Module, there is a __call__ method which obtains the forward function from the subclass and calls it. # + [markdown] id="rnnJlj2Y3iii" # # 4.1.1. Image Processing Model - RESNet50 # # # # + id="QWvMo_nV1XCb" executionInfo={"status": "ok", "timestamp": 1620409126745, "user_tz": -120, "elapsed": 397201, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} from torch.nn import functional as F import torch.nn as nn import pretrainedmodels class SEResnext50_32x4d(nn.Module): def __init__(self, pretrained='imagenet'): super(SEResnext50_32x4d, self).__init__() self.base_model = pretrainedmodels.__dict__["se_resnext50_32x4d"](pretrained=None) if pretrained is not None: self.base_model.load_state_dict( self.base_model.load_state_dict(torch.load(resnet_model_path)) ) self.l0 = nn.Linear(2048, 27) # Applies a linear transformation to the incoming data # batch_size = 2048 def forward(self, image): batch_size, _, _, _ = image.shape # During the training you will get batches of images, # so your shape in the forward method will get an additional batch dimension at dim0: # [batch_size, channels, height, width]. x = self.base_model.features(image) #extracting feature vector from network after feature leaning #This is the flatten vector x = F.adaptive_avg_pool2d(x, 1).reshape(batch_size, -1) #adaptive_avg_pool2d : Kernel size = (input_size+target_size-1) // target_size rounded up #Then the positions of where to apply the kernel are computed as rounded equidistant points between 0 and input_size - kernel_size out = self.l0(x) return out # + id="NaAYA3QY1cbO" executionInfo={"status": "ok", "timestamp": 1620409126747, "user_tz": -120, "elapsed": 397196, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} class Identity(nn.Module): def __init__(self): super(Identity, self).__init__() def forward(self, x): return x # + [markdown] id="vr8ciZGg1Ee4" # # 4.1.2. Instaniating the Image Processing Network # Now we create an instance from the SEResnext50_32x4d class that we defined and load the weights from a pretrained model, since the training is done previously. # + id="ZGGor-mS1zJe" executionInfo={"status": "ok", "timestamp": 1620409127043, "user_tz": -120, "elapsed": 397486, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} #data_path = '/content/drive/My Drive/Rakuten/' img_model = SEResnext50_32x4d(pretrained=None) # img_model.load_state_dict(torch.load(os.path.join(data_path, 'models/RESNET_best_model.pt'))) # img_model.cuda() # + [markdown] id="P9gfXO9k21-o" # # 4.1.3. 
# Printing the Model's Parameters

# + id="WOiwcopd8m8B"
img_model.l0 = Identity()

# + id="Jb7I6qBq2c0E"
for param in img_model.parameters():
    print(type(param), param.size())

# + [markdown] id="9iqaXH5A3Esl"
# # 4.1.4. Model's Params Require No Grads
#
# These are just regular tensors, with one very special addition: we tell PyTorch whether they require a gradient. When they do, PyTorch records all of the operations done on the tensors so that it can calculate the gradients during back-propagation automatically!
# Since our image model is already trained and its weights are loaded, there is no need to compute gradients for it, and its parameters do not need to be passed to the optimizer.

# + id="c0XJcLgW8u09"
# params iterates over the model's parameters, which are usually passed to the optimizer
for params in img_model.parameters():
    params.requires_grad = False

# + id="vaXULFfzkdB8"
img_model.out_proj = Identity()

# + [markdown] id="JWgGQqbU4nNR"
# # Image Data Preparation

# + id="4yNKxOJ2-QrA"
# # Data path
# text_data_path = os.path.join('/content/drive/My Drive/Rakuten')
# image_data_path = os.path.join('')

# + id="SM6bWAoKTsx7"
def get_img_path(img_id, prd_id, path):
    pattern = 'image' + '_' + str(img_id) + '_' + 'product' + '_' + str(prd_id) + '.jpg'
    return path + pattern

# + id="oYomnrt7wKWw"
print(len(Preprocess.train), len(train))

# + [markdown] id="QzgyQu0aY17J"
# # Obtaining & Splitting Images

# + colab={"base_uri": "https://localhost:8080/", "height": 236,
"referenced_widgets": ["0bada4c2de514c79b0165c51e45fb928", "01c0e73256904e4f91f5fa629b31750f", "8a7c39f4db714559aac80618d68134bf", "2342e27b205e4a8db822280456e24809", "04f3c2a7bfeb45638ff9b7347ccd1947", "086d4b9640154df7b66b328c580a1be5", "512b0f76436d4f2997bf1ac619ecc7e5", "4335f45453064851aeacbeae0d603639"]} id="emmAR45klxoi" executionInfo={"status": "ok", "timestamp": 1620409129009, "user_tz": -120, "elapsed": 399385, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} outputId="b5b1fcaa-2dae-48b7-ed36-8989a27290a5" from torch.utils.data import DataLoader, RandomSampler, SequentialSampler # train 65% # validation 15% # test 20% # (original training) => train, test => 80, 20 val_size = 0.2 # (train) => train, val => 85, 15 val_size = 0.15 # train_img = train[['Image_id','Product_id','labels','product']] train_img = Preprocess.train[['Image_id','Product_id','labels','product']] train_img['image_path'] = Preprocess.train.progress_apply(lambda x: get_img_path(x['Image_id'], x['Product_id'], path = os.path.join(image_data_path, 'image_training/')),axis=1) tr_df, test_df, tr_labels, test_labels = train_test_split(train_img, train_img['labels'], random_state=2020, test_size = 0.2, stratify=train_img['labels']) train_df, val_df, train_labels, val_labels = train_test_split(tr_df, tr_labels, random_state=2020, test_size = 0.15, stratify=tr_labels) print("Train: ", len(train_df)) print("Val: ", len(val_df)) print("Test: ", len(test_df)) print ("") # + colab={"base_uri": "https://localhost:8080/"} id="B_h5_e_Gw-M9" executionInfo={"status": "ok", "timestamp": 1620409129009, "user_tz": -120, "elapsed": 399373, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} outputId="f2b7e18f-ca16-4508-a813-016ccc8c51c8" print("Original Images Df: ", len(train_img)) print("Train Images DF: " , len(train_df)) print("Validation Images DF: ", len(val_df)) # + id="aIAzQAsCTzBs" executionInfo={"status": "ok", "timestamp": 1620409129010, "user_tz": -120, "elapsed": 399365, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} # print (train_img['image_path'][0]) # + [markdown] id="lRigVJwVBOcz" # # Image Data Augmentation # # We're going to be making use of Pytorch's transforms for preparing the input images to be used by ouur model. # # # # # # * We'll need to make sure the images in the training set and validation set are the same size, so we'll be using transforms.Resize # * We'll also be doing a little data augmentation, trying to improve the performance of our model by forcing it to learn about images at different angles and crops, so we'll randomly crop and rotate the images. # # * we'll make tensors out of the images, as PyTorch works with tensors. # * Finally, we'll normalize the images, which helps the network work with values that may be have a wide range of different values. # # # * We then compose all our chosen transforms. 
# # It worth mentioning that validation transforms don't have any of the flipping or rotating, as they aren't part of our training set, so the network isn't learning about them # # # # # # # + id="-Togfn7XmpW5" executionInfo={"status": "ok", "timestamp": 1620409129010, "user_tz": -120, "elapsed": 399357, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} input_size = 224 # for Resnt # Applying Transforms to the Data from torchvision import datasets, models, transforms image_transforms = { 'train': transforms.Compose([ transforms.RandomResizedCrop(size=256, scale=(0.8, 1.0)), transforms.RandomRotation(degrees=15), transforms.RandomHorizontalFlip(), transforms.Resize(size=256), transforms.CenterCrop(size=input_size), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), 'valid': transforms.Compose([ transforms.Resize(size=256), transforms.CenterCrop(size=input_size), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), 'test': transforms.Compose([ transforms.Resize(size=256), transforms.CenterCrop(size=input_size), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]) } # + [markdown] id="nB9-K4lUNoqT" # # Text Processing Models - BertForSequenceClassification # # Thankfully, the huggingface pytorch implementation includes a set of interfaces designed for a variety of NLP tasks. Though these interfaces are all built on top of a trained BERT model, each has different top layers and output types designed to accomodate their specific NLP task. # # We first want to modify the pre-trained BERT model to give outputs for classification, and then we want to continue training the model on our dataset until that the entire model, end-to-end, is well-suited for our task. # # **BertForSequenceClassification** is one of the current of classes provided for fine-tuning. # # This is the normal BERT model with an added single linear layer on top for classification that we will use as a sentence classifier. As we feed input data, the entire pre-trained BERT model and the additional untrained classification layer is trained on our specific task. 
# # - Not to forget that Camembet model inherits RobertaModel # + [markdown] id="I_vkDiuRC70z" # # 4.2 CamemBERT Model # + id="imOtgakyCtFe" executionInfo={"status": "ok", "timestamp": 1620409129010, "user_tz": -120, "elapsed": 399329, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} class vec_output_CamembertForSequenceClassification(CamembertModel): config_class = CamembertConfig def __init__(self, config): super().__init__(config) self.num_labels = config.num_labels self.roberta = CamembertModel(config) self.dense = nn.Linear(256*config.hidden_size, config.hidden_size) self.dropout = nn.Dropout(0.1) self.out_proj = nn.Linear(config.hidden_size, config.num_labels) self.init_weights() def forward( self, input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None, output_attentions=None, output_hidden_states=None, ): outputs = self.roberta( input_ids, attention_mask = attention_mask, token_type_ids = token_type_ids, position_ids = position_ids, head_mask = head_mask, inputs_embeds=inputs_embeds, # output_attentions=output_attentions, # output_hidden_states=output_hidden_states, ) sequence_output = outputs[0] #(B,256,768) x = sequence_output.view(sequence_output.shape[0], 256*768) # x = sequence_output[:, 0, :] # take <s> token (equiv. to [CLS])-> #(B,768) Image -> (B,2048) x = self.dense(x) # 768 -> 768 feat= torch.tanh(x) logits = self.out_proj(feat) # 768 -> 27 outputs = (logits,) + outputs[2:] #3rd element onwards return outputs,feat # (loss), logits, (hidden_states), (attentions) # + [markdown] id="OtPoFtaqDAj6" # # FlauBERT Model # + id="a16smoYhDCmn" executionInfo={"status": "ok", "timestamp": 1620409129012, "user_tz": -120, "elapsed": 399314, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} num_classes = 27 class vec_output_FlaubertForSequenceClassification(FlaubertModel): config_class = FlaubertConfig def __init__(self, config): super().__init__(config) self.transformer = FlaubertModel(config) self.sequence_summary = SequenceSummary(config) self.init_weights() self.dropout = torch.nn.Dropout(0.1) self.classifier = torch.nn.Linear(config.hidden_size, num_classes) def forward( self, input_ids=None, attention_mask=None, langs=None, token_type_ids=None, position_ids=None, lengths=None, cache=None, head_mask=None, inputs_embeds=None, labels=None, ): transformer_outputs = self.transformer( input_ids, attention_mask=attention_mask, langs=langs, token_type_ids=token_type_ids, position_ids=position_ids, lengths=lengths, cache=cache, head_mask=head_mask, inputs_embeds=inputs_embeds, ) #output = self.dropout(output) output = transformer_outputs[0] vec = output[:,0] #logits dense = self.dropout(vec) #classifier logits = self.classifier(dense) outputs = (logits,) + transformer_outputs[1:] # Keep new_mems and attention/hidden states if they are here return outputs,dense # + [markdown] id="b2krGO8BnVfd" # # Dataset Fusion # + id="qIIQ5-g3gU85" executionInfo={"status": "ok", "timestamp": 1620409129012, "user_tz": -120, "elapsed": 399301, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} # TODO DELELTE IMAGES WITH NO DESCRIPTION # From the preprocesssed 
file # + id="RRq24YrsnU9X" executionInfo={"status": "ok", "timestamp": 1620409129013, "user_tz": -120, "elapsed": 399289, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} from torch.utils.data import Dataset, DataLoader, Subset from PIL import Image import matplotlib.pyplot as plt import cv2 class FusionDataset(Dataset): def __init__(self, df, inputs_cam, masks_cam, inputs_flau, masks_flau, transform=None, mode='train'): self.df = df self.transform = transform self.mode = mode self.inputs_cam = inputs_cam self.masks_cam = masks_cam self.inputs_flau = inputs_flau self.masks_flau = masks_flau def __len__(self): return len(self.df) def __getitem__(self,idx): im_path = self.df.iloc[idx]['image_path'] img= plt.imread(im_path) #img = cv2.imread(im_path) #img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) img = Image.fromarray(img) if self.transform is not None: img = self.transform(img) img = img.cuda() input_id_cam = self.inputs_cam[idx].cuda() input_mask_cam = self.masks_cam[idx].cuda() input_id_flau = self.inputs_flau[idx].cuda() input_mask_flau = self.masks_flau[idx].cuda() if self.mode =='test': return img,input_id_cam,input_mask_cam,input_id_flau,input_mask_flau else: labels = torch.tensor(self.df.iloc[idx]['labels']).cuda() return img,input_id_cam,input_mask_cam,input_id_flau,input_mask_flau,labels # + id="O5RRyCzd4XSd" executionInfo={"status": "ok", "timestamp": 1620409129015, "user_tz": -120, "elapsed": 399277, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} a1 = torch.randn(3,10,10) # + id="o_51PvBY4Zpr" executionInfo={"status": "ok", "timestamp": 1620409129015, "user_tz": -120, "elapsed": 399261, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} reduce_dim = nn.Conv1d(in_channels = 10 , out_channels = 1 , kernel_size= 1) # + colab={"base_uri": "https://localhost:8080/"} id="wryxtip-4b0m" executionInfo={"status": "ok", "timestamp": 1620409129015, "user_tz": -120, "elapsed": 399224, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} outputId="3887c2ae-2f64-482b-e49f-e3a2b7f4f71a" reduce_dim(a1).view(3,10).shape # + colab={"base_uri": "https://localhost:8080/"} id="hTUa-CbEnluo" executionInfo={"status": "ok", "timestamp": 1620409130852, "user_tz": -120, "elapsed": 401039, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} outputId="37dc80e2-93b8-41fa-e9fb-64909c37c59f" print('Using Camembert') tokenizer_cam = CamembertTokenizer.from_pretrained('camembert-base', do_lowercase=False) print('Using Flaubert') tokenizer_flau = FlaubertTokenizer.from_pretrained('flaubert/flaubert_base_cased', do_lowercase=False) # input_ids_test_flau,attention_masks_test_flau = prep_input(test_sentences, labels=None, max_len=max_len,tokenizer = tokenizer_flau) # input_ids_test_cam,attention_masks_test_cam = prep_input(test_sentences , labels=None, max_len=max_len,tokenizer = tokenizer_cam) # + id="e8gijJIM5pQX" executionInfo={"status": "ok", "timestamp": 1620409130853, "user_tz": -120, 
"elapsed": 401015, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} # print(type(Preprocess.test_sentences)) # print(len(Preprocess.test_sentences)) # + id="HXbLhSkyCilV" executionInfo={"status": "ok", "timestamp": 1620409130854, "user_tz": -120, "elapsed": 401003, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} # Moodels path resnet_model_path = '/content/Rakuten/models/RESNET_baseline_model.pt' camembert_model_path = '/content/Rakuten/models/CamemBERT_best_model_baseline.pt' flaubert_model_path = '/content/Rakuten/models/FlauBERT_best_model_baseline.pt' #my_flau_path = '/content/Rakuten/models/FlauBERT_best_model_description.pt' # + [markdown] id="09NiYnUxfi94" # # Fuse # When using pretrained models, PyTorch sets the model to be unfrozen (will have its weights adjusted) by default # + id="TxW8Ups_nr8O" executionInfo={"status": "ok", "timestamp": 1620409130854, "user_tz": -120, "elapsed": 400979, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} class vector_fusion(nn.Module): def __init__(self): super(vector_fusion, self).__init__() self.img_model = SEResnext50_32x4d(pretrained=None) self.img_model.load_state_dict(torch.load(resnet_model_path)) self.img_model.l0=Identity() for params in self.img_model.parameters(): params.requires_grad=False self.cam_model= vec_output_CamembertForSequenceClassification.from_pretrained( 'camembert-base', # Use the 12-layer BERT model, with an uncased vocab. num_labels = 27, # The number of output labels--2 for binary classification. # You can increase this for multi-class tasks. output_attentions = False, # Whether the model returns attentions weights. output_hidden_states = False,) # Whether the model returns all hidden-states. 
checkpoint = torch.load(camembert_model_path) self.cam_model.load_state_dict(checkpoint) for param in self.cam_model.parameters(): param.requires_grad=False self.cam_model.out_proj=Identity() self.flau_model = vec_output_FlaubertForSequenceClassification.from_pretrained( 'flaubert/flaubert_base_cased', num_labels = 27, output_attentions = False, output_hidden_states = False,) checkpoint = torch.load(flaubert_model_path) self.flau_model.load_state_dict(checkpoint) for param in self.flau_model.parameters(): param.requires_grad=False self.flau_model.classifier=Identity() self.reduce_dim=nn.Conv1d(in_channels = 2048 , out_channels = 768 , kernel_size= 1) self.reduce_dim2=nn.Conv1d(in_channels = 768 , out_channels = 1 , kernel_size= 1) self.out=nn.Linear(768*3, 27) #gamma # self.w1 = nn.Parameter(torch.zeros(1)) # self.w2 = nn.Parameter(torch.zeros(1)) # self.w3 = nn.Parameter(torch.zeros(1)) def forward(self,img,input_id_cam,input_mask_cam,input_id_flau,input_mask_flau): cam_emb,vec1 =self.cam_model(input_id_cam, token_type_ids=None, attention_mask=input_mask_cam) flau_emb,vec2 =self.flau_model(input_id_flau, token_type_ids=None, attention_mask=input_mask_flau) #Projecting the image embedding to lower dimension img_emb = self.img_model(img) img_emb = img_emb.view(img_emb.shape[0],img_emb.shape[1],1) img_emb = self.reduce_dim(img_emb) img_emb = img_emb.view(img_emb.shape[0],img_emb.shape[1]) ###### bs * 768 #summing up the vectors #text_emb = cam_emb[0] + flau_emb[0] #Bilinear #text_emb = text_emb.view(text_emb.shape[0],1,text_emb.shape[1]) ##### bs * 1 * 768 #Bilinear Pooling #pool_emb = torch.bmm(img_emb,text_emb) ### bs * 768 * 768 #pool_emb = self.reduce_dim2(pool_emb).view(text_emb.shape[0],768) #### bs * 1 * 768 fuse= torch.cat([img_emb,cam_emb[0],flau_emb[0]],axis=1) logits=self.out(fuse) return logits # + colab={"base_uri": "https://localhost:8080/"} id="J1VU1vrSQFzD" executionInfo={"status": "ok", "timestamp": 1620409131571, "user_tz": -120, "elapsed": 401681, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} outputId="4604b622-fd75-4206-e089-d864ab5e1b8c" img_model = SEResnext50_32x4d(pretrained=None) img_model.load_state_dict(torch.load(resnet_model_path)) # + colab={"base_uri": "https://localhost:8080/"} id="JoAwxoKvP1rw" executionInfo={"status": "ok", "timestamp": 1620409169218, "user_tz": -120, "elapsed": 439299, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} outputId="88f2eada-cd84-41cb-c9ee-db08569e9b1b" cam_model= vec_output_CamembertForSequenceClassification.from_pretrained( 'camembert-base', # Use the 12-layer BERT model, with an uncased vocab. num_labels = 27, # The number of output labels--2 for binary classification. # You can increase this for multi-class tasks. output_attentions = False, # Whether the model returns attentions weights. output_hidden_states = False,) # Whether the model returns all hidden-states. 
checkpoint = torch.load(camembert_model_path) cam_model.load_state_dict(checkpoint) # + colab={"base_uri": "https://localhost:8080/", "height": 237, "referenced_widgets": ["2af2f799d8af4561a839927f10e3f113", "1acc380313f3413bb13a5f9f1b2557df", "51b0ea56890b4abe8710fa7d7c47fa8b", "937d9258cdb14df0aad138843f8e28d1", "9a12a131ea584175854f5681d487c7a3", "25900f07c0af4205b0ae2aa2a075cc3a", "c782cbe611ac4400bb02d9c1ecd9ac05", "07592f54764a4735b7e538a3ccafce54", "f3266f39bb914392ab070e2518404995", "45602bfe1457424f8e6fc50e5d659c50", "cb6fbae3998f4527a462f11695228007", "78e7ace57f9848fd9f23fcd2f7d36e21", "760bee56cb0240e6ab2ddead28c1be43", "fd952588ae1e40e3bdeeb5c6153d2054", "a849e0c18da9401c817e4ba6886b59c1", "330db9164d8a48b89b470631ecf7a0cc"]} id="FZfncq4PO_EY" executionInfo={"status": "ok", "timestamp": 1620409207428, "user_tz": -120, "elapsed": 477488, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} outputId="cbbda903-9ae8-4ae2-8ede-3919ba743684" flau_model = vec_output_FlaubertForSequenceClassification.from_pretrained( 'flaubert/flaubert_base_cased', num_labels = 27, output_attentions = False, output_hidden_states = False,) checkpoint = torch.load(flaubert_model_path) flau_model.load_state_dict(checkpoint) # + [markdown] id="eU5pHNZKNKr3" # # Instantiation & Training of Fusion Model # + colab={"base_uri": "https://localhost:8080/"} id="8Qpj2jQYu0P-" executionInfo={"status": "ok", "timestamp": 1620409230554, "user_tz": -120, "elapsed": 500599, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} outputId="70f9af82-5116-4a75-8a21-d5dd6bf3d119" model = vector_fusion() # + id="GN9HIANRu1_R" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1620409231603, "user_tz": -120, "elapsed": 501633, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} outputId="60fb9d18-c08f-47e5-eb7f-b31dff39d17e" model.cuda() # + [markdown] id="wbah-djKPyRB" # # Fuse Input Data # + id="ZZBQFQnFSn6L" executionInfo={"status": "ok", "timestamp": 1620409231603, "user_tz": -120, "elapsed": 501621, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} train_dataset = FusionDataset (train_df, tr_inputs_cam, tr_masks_cam, tr_inputs_flau, tr_masks_flau, transform = image_transforms['train']) val_dataset = FusionDataset (val_df, val_inputs_cam, val_masks_cam, val_inputs_flau, val_masks_flau, transform = image_transforms['valid']) test_dataset = FusionDataset (test_df, input_ids_test_cam, attention_masks_test_cam, input_ids_test_flau, attention_masks_test_flau , transform = image_transforms['test']) # + colab={"base_uri": "https://localhost:8080/"} id="yPY9mTkrs2Jl" executionInfo={"status": "ok", "timestamp": 1620409231604, "user_tz": -120, "elapsed": 501613, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} outputId="49f9a938-b2a6-4f8e-9e40-fe5182c91326" print(len(train_df), len(tr_inputs_cam), len(tr_inputs_flau)) # + [markdown] id="Uh2PccgCXJeI" # # Data Loaders # # We 
need to use the DataLoaders to create iterable objects for us to work with. We tell it which datasets we want to use, give it a batch size, and shuffle the data # + id="nfFhYJYpSr05" executionInfo={"status": "ok", "timestamp": 1620409231604, "user_tz": -120, "elapsed": 501605, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} batch_size = 32 #increase batch size to reduce the noise train_dataloader = DataLoader(train_dataset,batch_size=batch_size,shuffle=True) validation_dataloader = DataLoader(val_dataset,batch_size=batch_size,shuffle=False) test_dataloader = DataLoader(test_dataset,batch_size=batch_size,shuffle=False) # + id="Y-gP7nVnSwAt" executionInfo={"status": "ok", "timestamp": 1620409231604, "user_tz": -120, "elapsed": 501597, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} optimizer = AdamW( model.parameters(), lr = 2e-4, # args.learning_rate - default is 5e-5, our notebook had 2e-5 eps = 1e-8, # args.adam_epsilon - default is 1e-8. weight_decay= 0.001 ) # + id="PgoWU2g6S1CZ" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1620409231604, "user_tz": -120, "elapsed": 501589, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} outputId="6647e9c9-5fd5-49b0-acc5-a82c70dcf7af" def count_parameters(model): return sum(p.numel() for p in model.parameters() if p.requires_grad) count_parameters(model) # + id="0CRNzbwCS6vU" executionInfo={"status": "ok", "timestamp": 1620409231605, "user_tz": -120, "elapsed": 501581, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} from transformers import get_linear_schedule_with_warmup # Number of training epochs. The BERT authors recommend between 2 and 4. # We chose to run for 4, but we'll see later that this may be over-fitting the # training data. epochs = 6 # Total number of training steps is [number of batches] x [number of epochs]. # (Note that this is not the same as the number of training samples). total_steps = len(train_dataloader) * epochs # Create the learning rate scheduler. 
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps = 0, # Default value in run_glue.py num_training_steps = total_steps) # + id="en6CE-_8S-Vw" executionInfo={"status": "ok", "timestamp": 1620409231605, "user_tz": -120, "elapsed": 501573, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} import torch.nn as nn loss_criterion = nn.CrossEntropyLoss() # + id="MLg8d2UmTBLp" executionInfo={"status": "ok", "timestamp": 1620409231605, "user_tz": -120, "elapsed": 501566, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} def flat_accuracy(preds, labels): pred_flat = np.argmax(preds, axis=1).flatten() labels_flat = labels.flatten() return np.sum(pred_flat == labels_flat) / len(labels_flat) # + id="kY7xbQXqTHUF" colab={"base_uri": "https://localhost:8080/", "height": 0} executionInfo={"status": "ok", "timestamp": 1620417326074, "user_tz": -120, "elapsed": 8596027, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} outputId="2a15247c-bff9-4140-b4aa-f3cda052ad70" from sklearn.metrics import f1_score seed_val = 42 #seed_val = 56 random.seed(seed_val) np.random.seed(seed_val) torch.manual_seed(seed_val) torch.cuda.manual_seed_all(seed_val) # We'll store a number of quantities such as training and validation loss, # validation accuracy, and timings. training_stats = [] train_loss_values = [] val_loss_values = [] logits_values =[] ############ total_train_accuracy = 0 avg_train_accuracy = 0 train_accuracy_values = [] val_accuracy_values = [] ########## # Measure the total training time for the whole run. total_t0 = time.time() # For each epoch... for epoch_i in range(0, epochs): # ======================================== # Training # ======================================== # Perform one full pass over the training set. print("") print('======== Epoch {:} / {:} ========'.format(epoch_i + 1, epochs)) print('Training...') #tr and val # vec_output_tr = [] # vec_output_val =[] # Measure how long the training epoch takes. t0 = time.time() # Reset the total loss for this epoch. total_train_loss = 0 total_train_accuracy = 0 predictions=[] true_labels=[] # Put the model into training mode. Don't be mislead--the call to # `train` just changes the *mode*, it doesn't *perform* the training. # `dropout` and `batchnorm` layers behave differently during training # vs. test (source: https://stackoverflow.com/questions/51433378/what-does-model-train-do-in-pytorch) model.to(device) best_f1 = 0 model.train() # For each batch of training data... for step, batch in (enumerate(train_dataloader)): # Unpack this training batch from our dataloader. # # As we unpack the batch, we'll also copy each tensor to the GPU using the # `to` method. 
        #
        # `batch` contains three pytorch tensors:
        #   [0]: input ids
        #   [1]: attention masks
        #   [2]: labels
        # return img, input_id_cam, input_mask_cam, input_id_flau, input_mask_flau
        b_img = batch[0].to(device)
        b_input_id_cam = batch[1].to(device)
        b_input_mask_cam = batch[2].to(device)
        b_input_id_flau = batch[3].to(device)
        b_input_mask_flau = batch[4].to(device)
        b_labels = batch[5].to(device)

        model.zero_grad()  # set the gradients to zero before starting backpropagation because PyTorch accumulates
                           # the gradients on subsequent backward passes

        logits = model(b_img, b_input_id_cam, b_input_mask_cam, b_input_id_flau, b_input_mask_flau)  # 27

        # Defining the loss
        loss = loss_criterion(logits, b_labels)

        # saving the features_tr
        # vec = vec.detach().cpu().numpy()
        # vec_output_tr.extend(vec)

        # Accumulate the training loss over all of the batches so that we can
        # calculate the average loss at the end. `loss` is a Tensor containing a
        # single value; the `.item()` function just returns the Python value
        # from the tensor.
        total_train_loss += loss.item()

        # -------------------------------------------------------
        # Move logits and labels to CPU
        logits = logits.detach().cpu().numpy()
        predicted_labels = np.argmax(logits, axis=1)
        predictions.extend(predicted_labels)

        label_ids = b_labels.to('cpu').numpy()
        true_labels.extend(label_ids)

        total_train_accuracy += flat_accuracy(logits, label_ids)
        # -------------------------------------------------------

        # Perform a backward pass to calculate the gradients.
        loss.backward()

        # Clip the norm of the gradients to 1.0.
        # This is to help prevent the "exploding gradients" problem.
        torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)

        # Update parameters and take a step using the computed gradient.
        # The optimizer dictates the "update rule"--how the parameters are
        # modified based on their gradients, the learning rate, etc.
        optimizer.step()

        # Update the learning rate.
        scheduler.step()

    # ------------------------------------------------------
    avg_train_accuracy = total_train_accuracy / len(train_dataloader)
    print("")
    print("Training Accuracy: {}".format(avg_train_accuracy))
    train_accuracy_values.append(avg_train_accuracy)

    ######################################################################################
    # Calculate the average loss over all of the batches.
    avg_train_loss = total_train_loss / len(train_dataloader)
    train_loss_values.append(avg_train_loss)
    # -- move 2 lines up (newly added code block)

    # Measure how long this epoch took.
    training_time = format_time(time.time() - t0)

    print(" Average training loss: {} ".format(avg_train_loss))
    print(" Training epoch took: {:} ".format(training_time))

    # ========================================
    #               Validation
    # ========================================
    # After the completion of each training epoch, measure our performance on
    # our validation set.

    print("")
    print("Running Validation...")

    t0 = time.time()

    # Put the model in evaluation mode--the dropout layers behave differently
    # during evaluation.
    model.eval()

    # Tracking variables
    total_eval_accuracy = 0
    total_eval_loss = 0
    nb_eval_steps = 0
    predictions = []
    true_labels = []

    # Evaluate data for one epoch
    for batch in validation_dataloader:

        # Unpack this training batch from our dataloader.
        #
        # As we unpack the batch, we'll also copy each tensor to the GPU using
        # the `to` method.
        #
        # `batch` contains three pytorch tensors:
        #   [0]: input ids
        #   [1]: attention masks
        #   [2]: labels
        b_img = batch[0].to(device)
        b_input_id_cam = batch[1].to(device)
        b_input_mask_cam = batch[2].to(device)
        b_input_id_flau = batch[3].to(device)
        b_input_mask_flau = batch[4].to(device)
        b_labels = batch[5].to(device)

        # Tell pytorch not to bother with constructing the compute graph during
        # the forward pass, since this is only needed for backprop (training).
        with torch.no_grad():
            # Forward pass, calculate logit predictions.
            # token_type_ids is the same as the "segment ids", which
            # differentiates sentence 1 and 2 in 2-sentence tasks.
            # The documentation for this `model` function is here:
            # https://huggingface.co/transformers/v2.2.0/model_doc/bert.html#transformers.BertForSequenceClassification
            # Get the "logits" output by the model. The "logits" are the output
            # values prior to applying an activation function like the softmax.
            logits = model(b_img, b_input_id_cam, b_input_mask_cam, b_input_id_flau, b_input_mask_flau)  # new

            # defining the val loss
            loss = loss_criterion(logits, b_labels)

        # Accumulate the validation loss.
        total_eval_loss += loss.item()

        # Move logits and labels to CPU
        logits = logits.detach().cpu().numpy()
        predicted_labels = np.argmax(logits, axis=1)
        predictions.extend(predicted_labels)

        label_ids = b_labels.to('cpu').numpy()
        true_labels.extend(label_ids)

        ##########################################################################
        logits_values.append(predicted_labels)
        ##########################################################################

        # saving the features_tr
        # vec = vec.detach().cpu().numpy()
        # vec_output_val.extend(vec)

        # Calculate the accuracy for this batch of test sentences, and
        # accumulate it over all batches.
        total_eval_accuracy += flat_accuracy(logits, label_ids)

    # Report the final accuracy for this validation run.
    avg_val_accuracy = total_eval_accuracy / len(validation_dataloader)
    # --------------------------------
    val_accuracy_values.append(avg_val_accuracy)
    # --------------------------------
    print(" Accuracy: {}".format(avg_val_accuracy))

    # Calculate the average loss over all of the batches.
    avg_val_loss = total_eval_loss / len(validation_dataloader)
    # -----------------------------
    val_loss_values.append(avg_val_loss)

    # Measure how long the validation run took.
    validation_time = format_time(time.time() - t0)

    print(" Validation Loss: {}".format(avg_val_loss))
    print(" Validation took: {:}".format(validation_time))
    print("Validation F1-Score: {}".format(f1_score(true_labels, predictions, average='macro')))

    curr_f1 = f1_score(true_labels, predictions, average='macro')
    if curr_f1 > best_f1:
        best_f1 = curr_f1
        torch.save(model.state_dict(), '/content/drive/My Drive/Rakuten/models/baseline_final_fusion.pt')
        # np.save('best_vec_train_model_train.npy', vec_output_tr)
        # np.save('best_vec_val.npy', vec_output_val)

    # Record all statistics from this epoch.
    # training_stats.append(
    #     {
    #         'epoch': epoch_i + 1,
    #         'Training Loss': avg_train_loss,
    #         'Valid. Loss': avg_val_loss,
    #         'Valid. Accur.': avg_val_accuracy,
    #         'Training Time': training_time,
    #         'Validation Time': validation_time
    #     }
    # )

print("")
print("Training complete!")
print("Total training took {:} (h:mm:ss)".format(format_time(time.time() - total_t0)))

print()
plt.plot(np.array(train_loss_values), 'r', label='train loss')
plt.plot(np.array(val_loss_values), 'b', label='val loss')
plt.legend()
plt.title('Loss Curve')
plt.show()

print()
plt.plot(np.array(train_accuracy_values), 'r', label='train accuracy')
plt.plot(np.array(val_accuracy_values), 'b', label='val accuracy')
plt.legend()
plt.title('Train Curve')
plt.show()

# print(logits_values)

# + id="1ptp-vlMkCx0" executionInfo={"status": "ok", "timestamp": 1620417326077, "user_tz": -120, "elapsed": 8596022, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}}
model_path = '/content/drive/My Drive/Rakuten/models/baseline_final_fusion.pt'

# + colab={"base_uri": "https://localhost:8080/"} id="EjQ2KpIIkHAF" executionInfo={"status": "ok", "timestamp": 1620417339153, "user_tz": -120, "elapsed": 8609089, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} outputId="7c8278bb-e98a-4aa4-95f9-8deaad4dd14e"
checkpoint = torch.load(model_path)
model.load_state_dict(checkpoint)
# A state_dict is simply a Python dictionary object
# that maps each layer to its parameter tensor

# + id="Y_VJZ6BCkIXi" executionInfo={"status": "ok", "timestamp": 1620417339154, "user_tz": -120, "elapsed": 8609082, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}}
import matplotlib.pyplot as plt
from sklearn.metrics import f1_score

def predict_pyt(model, prediction_dataloader):
    """
    model: pytorch model
    prediction_dataloader: DataLoader object for which the predictions have to be made.
    return:
    predictions: - Direct predicted labels
    softmax_logits: - logits which are normalized with softmax on output"""
    # NOTE: as written, this function only prints metrics and does not return the values
    # described in the docstring; a returning variant is sketched in the cell below.

    print("")
    print("Running Testing...")

    t0 = time.time()

    # Put the model in evaluation mode--the dropout layers behave differently
    # during evaluation.
    model.eval()

    # Tracking variables
    total_eval_accuracy = 0
    total_eval_loss = 0
    nb_eval_steps = 0
    predictions = []
    true_labels = []
    logits_values = []
    val_accuracy_values = []
    val_loss_values = []
    total_t0 = time.time()

    # Evaluate data for one epoch
    for batch in prediction_dataloader:

        # Unpack this training batch from our dataloader.
        #
        # As we unpack the batch, we'll also copy each tensor to the GPU using
        # the `to` method.
        #
        # `batch` contains three pytorch tensors:
        #   [0]: input ids
        #   [1]: attention masks
        #   [2]: labels
        b_img = batch[0].to(device)
        b_input_id_cam = batch[1].to(device)
        b_input_mask_cam = batch[2].to(device)
        b_input_id_flau = batch[3].to(device)
        b_input_mask_flau = batch[4].to(device)
        b_labels = batch[5].to(device)

        # Tell pytorch not to bother with constructing the compute graph during
        # the forward pass, since this is only needed for backprop (training).
        with torch.no_grad():
            # Forward pass, calculate logit predictions.
            # token_type_ids is the same as the "segment ids", which
            # differentiates sentence 1 and 2 in 2-sentence tasks.
            # The documentation for this `model` function is here:
            # https://huggingface.co/transformers/v2.2.0/model_doc/bert.html#transformers.BertForSequenceClassification
            # Get the "logits" output by the model. The "logits" are the output
            # values prior to applying an activation function like the softmax.
            logits = model(b_img, b_input_id_cam, b_input_mask_cam, b_input_id_flau, b_input_mask_flau)  # new

            # defining the val loss
            loss = loss_criterion(logits, b_labels)

        # Accumulate the validation loss.
        total_eval_loss += loss.item()

        # Move logits and labels to CPU
        logits = logits.detach().cpu().numpy()
        predicted_labels = np.argmax(logits, axis=1)
        predictions.extend(predicted_labels)

        label_ids = b_labels.to('cpu').numpy()
        true_labels.extend(label_ids)

        ##########################################################################
        logits_values.append(predicted_labels)
        ##########################################################################

        # Calculate the accuracy for this batch of test sentences, and
        # accumulate it over all batches.
        total_eval_accuracy += flat_accuracy(logits, label_ids)

    # Report the final accuracy for this validation run.
    avg_val_accuracy = total_eval_accuracy / len(prediction_dataloader)
    val_accuracy_values.append(avg_val_accuracy)
    # --------------------------------
    print(" Accuracy: {}".format(avg_val_accuracy))

    # Calculate the average loss over all of the batches.
    avg_val_loss = total_eval_loss / len(prediction_dataloader)
    # -----------------------------
    val_loss_values.append(avg_val_loss)

    # Measure how long the validation run took.
    validation_time = format_time(time.time() - t0)

    print(" Test Loss: {}".format(avg_val_loss))
    print(" Test took: {:}".format(validation_time))
    print("Test F1-Score: {}".format(f1_score(true_labels, predictions, average='macro')))
    curr_f1 = f1_score(true_labels, predictions, average='macro')

    print("")
    print("Testing complete!")
    # print("Total testing took {:} (h:mm:ss)".format(format_time(time.time()-total_t0)))

    print()
    plt.plot(np.array(val_accuracy_values), 'r', label='Test accuracy')
    plt.legend()
    plt.title('Test Curve')
    plt.show()
    print()
    print('DONE')

# + colab={"base_uri": "https://localhost:8080/", "height": 0} id="-KcbbPdzkKmh" executionInfo={"status": "ok", "timestamp": 1620417678315, "user_tz": -120, "elapsed": 8948235, "user": {"displayName": "Sara Fattouh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjGrdvZo2x_8K8abskRubIgMK3SdtS4cG-aQE01nA=s64", "userId": "09360594903764573322"}} outputId="2693594e-ba29-49b6-a45e-d87a4532031b"
predict_pyt(model, test_dataloader)
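# + [markdown]
# The `predict_pyt` docstring above describes returning the predicted labels and the
# softmax-normalised logits, but the function only prints metrics. The cell below is a
# minimal added sketch of that return path, not part of the original notebook; it assumes
# the same `model`, `device` and dataloader objects defined earlier, and the helper name
# `predict_with_probs` is hypothetical.

# +
import torch
import torch.nn.functional as F

def predict_with_probs(model, dataloader):
    """Collect predicted labels and softmax probabilities for every batch."""
    model.eval()
    all_preds, all_probs = [], []
    with torch.no_grad():
        for batch in dataloader:
            # Same unpacking order as above: image, CamemBERT ids/mask, FlauBERT ids/mask
            b_img, b_id_cam, b_mask_cam, b_id_flau, b_mask_flau = [t.to(device) for t in batch[:5]]
            logits = model(b_img, b_id_cam, b_mask_cam, b_id_flau, b_mask_flau)
            probs = F.softmax(logits, dim=1)  # normalise logits to probabilities
            all_preds.extend(probs.argmax(dim=1).cpu().numpy())
            all_probs.extend(probs.cpu().numpy())
    return all_preds, all_probs
# -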
85,788
/model-house-price/training_models.ipynb
29091f544ea1dd750634c751568fa327412d07d9
[]
no_license
rarafon/ml
https://github.com/rarafon/ml
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
101,445
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.15.2
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
import numpy as np
from sklearn.mixture import GaussianMixture
import pickle
from matplotlib import pyplot as plt
import matplotlib.cm as cm

from GMM import sample_GMM, GMM_distribution
# -

def plot_GMM(dataset, save_path):
    figure, axes = plt.subplots(nrows=1, ncols=1, figsize=(4.5, 4.5))
    ax = axes
    ax.set_aspect('equal')
    ax.axis('on')
    ax.set_title('GMM')
    x = dataset.data['samples']
    targets = dataset.data['labels']
    axes.scatter(x[:, 0], x[:, 1], marker='.', c=cm.Set1(targets.astype(float) / 2.0 / 2.0), alpha=0.3)
    plt.tight_layout()
    # plt.savefig(save_path, transparent=True, bbox_inches='tight')
    plt.show()

# # Generate Gaussian mixture data

if __name__ == '__main__':
    means = map(lambda x: np.array(x), [[0, 0], [1, 1], [-1, -1], [1, -1], [-1, 1]])
    means = list(means)
    std = 0.1
    variances = [np.eye(2) * std for _ in means]
    priors = [1.0 / len(means) for _ in means]

    gaussian_mixture = GMM_distribution(means=list(means), variances=variances, priors=priors)
    dataset = sample_GMM(10000, means, variances, priors, sources=('features', ))
    save_path = './gmm_data.pdf'
    plot_GMM(dataset, save_path)

dataset.data['samples'].shape

dataset.data['labels'].shape

dataset.data['density'].shape

with open('gmm_data.pkl', 'wb') as f:
    pickle.dump(dataset, f)

# # Gaussian mixture components

gm = GaussianMixture(n_components=5, random_state=0).fit(dataset.data['samples'])

gm.means_

gm.predict([[0, 0], [12, 3]])

gm.covariances_

cribe()

# + colab={"base_uri": "https://localhost:8080/"} id="AGeB8Mxd8jGw" outputId="05a8f65c-6154-4275-d19f-edbab5d73aac"
cars.info()

# + [markdown] id="3k99M8gvgoQB"
# ### Finding the two instances that we are going to predict the price in the data

# + id="0a3HxqYVLm6R" colab={"base_uri": "https://localhost:8080/", "height": 284} outputId="39eadb44-4b1b-498b-e7f1-6ed8d736ed09"
# prediction 1
pred1 = cars[cars['VEHICLE ID'] == int(13229)]
pred1.head()

# + id="HjiLIvIbL149" colab={"base_uri": "https://localhost:8080/", "height": 666} outputId="1644d54d-6a5a-49e3-b92c-c2abcd6c2ee7"
# prediction 2
pred2 = cars[cars['VEHICLE ID'] == int(18369)]
pred2.head()

# + [markdown] id="mkiqWRTgNe0u"
# # Step 2: Data Visualization

# + colab={"base_uri": "https://localhost:8080/", "height": 568} id="F2ROWf88MyXG" outputId="637eb3ad-182b-4c74-e71a-5367c2922862"
plt.figure(figsize=(25, 10))

plt.subplot(1, 2, 1)
plt.title('Car Price Distribution Plot')
sns.distplot(cars['SOLD PRICE'])

plt.subplot(1, 2, 2)
plt.title('Car Price Spread')
sns.boxplot(y=cars['SOLD PRICE'])

plt.show()

# + colab={"base_uri": "https://localhost:8080/"} id="aASHRN_ZNrAR" outputId="9d18bad5-8290-467f-9ecb-dd9e2fbeee63"
print(cars['SOLD PRICE'].describe(percentiles=[0.25, 0.50, 0.6, 0.75, 0.8, 0.90, 0.95, 1]))

# + [markdown] id="PHh6xprZQCo0"
# ## Inference:
# - The plot is right-skewed, meaning that most prices in the dataset are low (below 43,000).
# - There is a significant difference between the mean and the median of the price distribution.
# - The data points are spread far from the mean, which indicates a high variance in the car prices (80% of the prices are below 47,000, whereas the remaining 20% range from 57,000 to 530,000). A short check of these claims is sketched at the end of this notebook.

# + [markdown] id="kIEyIrWaSL37"
# ## 3.1 Visualizing Categorical Data

# + colab={"base_uri": "https://localhost:8080/"} id="-H1sNJ5BSiSy" outputId="7e340014-de3d-4c2b-b23e-6954330ba78e"
cars.columns

# + colab={"base_uri": "https://localhost:8080/", "height": 368} id="BTcy4m5EKWfi" outputId="2678ac0d-5ca5-4978-aef6-d41e2397fa37"
plt.figure(figsize=(25, 6))

plt.subplot(1, 3, 1)
dic = {}
for brand in cars['BRAND NAME']:
    if brand not in dic:
        dic[brand] = 1
    else:
        dic[brand] += 1
plt.bar(range(len(dic)), list(dic.values()), align='center',
        color=['black', 'orange', 'green', 'blue', 'cyan', 'yellow', 'red'])
plt.xticks(range(len(dic)), list(dic.keys()))
plt.title('Brand Histogram')
plt.xlabel('Brand Name')
plt.ylabel('Frequency of Brand Name')

plt.subplot(1, 3, 2)
dic1 = {}
for fuel in cars['FUEL']:
    if fuel not in dic1:
        dic1[fuel] = 1
    else:
        dic1[fuel] += 1
plt.bar(range(len(dic1)), list(dic1.values()), align='center', color=['red', 'blue', 'green'])
plt.xticks(range(len(dic1)), list(dic1.keys()))
plt.title('Fuel Histogram')
plt.xlabel('Fuel Type')
plt.ylabel('Frequency of Fuel Type')

plt.subplot(1, 3, 3)
dic2 = {}
for body in cars['BODYTYPE']:
    if body not in dic2:
        dic2[body] = 1
    else:
        dic2[body] += 1
plt.bar(range(len(dic2)), list(dic2.values()), align='center', color=['yellow', 'red'])
plt.xticks(range(len(dic2)), list(dic2.keys()))
plt.title('Car Type Histogram')
plt.xlabel('Car Type')
plt.ylabel('Frequency of Car Type')

plt.show()

# + [markdown] id="WIiBZzL24_fm"
# **Conclusions from the previous 3 graphs**
# - Nissan is the most sold brand
# - Petrol cars are sold the most
# - SUVs and crossovers are the favorite cars

# + colab={"base_uri": "https://localhost:8080/"} id="W90U-clJSpFe" outputId="562f9ba5-1f85-4646-b984-c363829a399c"
len(cars['VEHICLE ID'].unique())

# + [markdown] id="_dUGnHRuAbdb"
# ## Price Variance with respect to CONDITION GRADE

# + colab={"base_uri": "https://localhost:8080/", "height": 674} id="pGO36VWagZyF" outputId="1a84042e-1044-4dc1-f715-c3a01d3ff6a8"
# comparison of SOLD PRICE vs condition
plt.figure(figsize=(15, 10))
sns.boxplot(x=cars['CONDITION GRADE'], y=cars['SOLD PRICE'], palette=('cubehelix'))

print("Number of cars with excellent condition in the dataset", len(cars[cars['CONDITION GRADE'] == 1]))
print("Number of cars with good condition in the dataset", len(cars[cars['CONDITION GRADE'] == 2]))
print("Number of cars with average condition in the dataset", len(cars[cars['CONDITION GRADE'] == 3]))
print("Number of cars with bad condition in the dataset", len(cars[cars['CONDITION GRADE'] == 4]))

# + [markdown] id="DG3mAVMvoF1u"
# **Inference**
# - Cars in excellent condition have high price variance and resale value
# - There are more instances of cars in excellent condition available in the dataset

# + [markdown] id="jPmFNoUByP1a"
# # Resale Price change with respect to Country, Fuel and Body type

# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="jAD_cSUJyVbr" outputId="9e973c01-55f7-426e-fc5a-16a0515b4e02"
plt.figure(figsize=(25, 12))

df = pd.DataFrame(cars.groupby(['COUNTRYOFORIGIN_ADJ'])['SOLD PRICE'].mean().sort_values(ascending=False))
df.plot.bar()
plt.title('Country vs Average Price')

df = pd.DataFrame(cars.groupby(['FUEL'])['SOLD PRICE'].mean().sort_values(ascending=False))
df.plot.bar()
plt.title('Fuel Type vs Average Price')

df = pd.DataFrame(cars.groupby(['BODYTYPE'])['SOLD PRICE'].mean().sort_values(ascending=False))
df.plot.bar()
plt.title('Body Type vs Average Price')

plt.show()

# + [markdown] id="8N4uk3fZzcQe"
# **Inference**
# - Austria has the highest average sold price. This does not necessarily mean Austria is the most expensive country in which to buy a resale car; it means the expensive cars sold in Austria are rare but dominate the Austrian records in the dataset.
# - Electric/hybrid cars have the highest resale price compared to diesel and gas.
# - The average price of sedans is higher than that of SUVs/crossovers in the dataset.

# + [markdown] id="6Esfg7GP0co4"
# ## 3.2 Visualising Numerical Data

# + colab={"base_uri": "https://localhost:8080/"} id="GuT95ynv09xo" outputId="7f45376e-bf05-4845-d82b-a0cd0037b06c"
cars.columns

# + colab={"base_uri": "https://localhost:8080/", "height": 501} id="9CLSX5FVyjhI" outputId="51ad1e8a-cd83-4458-9468-0ca00a72c1c6"
def numerical_plot(x, y, z):
    # sns.set(rc={'figure.figsize':(20,10)})
    g = sns.pairplot(cars, x_vars=[x, y, z], y_vars='SOLD PRICE', size=4, aspect=1, kind='scatter', height=5)
    g.fig.set_figwidth(21)
    g.fig.set_figheight(8)
    # sns.set(rc={'figure.figsize':(20,10)})

numerical_plot('ODOMETER (KM)', 'MSRP', 'WEIGHT')
plt.show()

# + [markdown] id="B5JXzCUE3KQc"
# **Inference**
# - Resale price drops as ODOMETER increases
# - Resale price increases as MSRP increases
# - Resale price increases as WEIGHT increases
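# + [markdown]
# The "Inference" notes above make claims about the skew and spread of `SOLD PRICE`.
# The cell below is a small added sketch (not part of the original notebook) showing how
# those claims can be checked; it assumes the same `cars` DataFrame loaded earlier.

# +
# Quantify the right skew and the mean/median gap noted in the inference bullets
print("Skewness:", cars['SOLD PRICE'].skew())
print("Mean:", cars['SOLD PRICE'].mean(), " Median:", cars['SOLD PRICE'].median())

# Upper percentiles referenced above
print(cars['SOLD PRICE'].quantile([0.8, 0.9, 0.95, 1.0]))
# -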
8,698
/Generator - GRT Init - Grid.ipynb
4d37b5212deb51c1a0eb419ecf9a31ce9deeea9e
[]
no_license
anonsub/GMR
https://github.com/anonsub/GMR
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
24,615
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # -*- coding: utf-8 -*- """ Created on Tue May 26 18:53:11 2020 @author: pp9596 """ # General imports from datetime import datetime, timedelta import numpy as np import pandas as pd import os, sys, gc, time, warnings, pickle, psutil, random from multiprocessing import Pool # Multiprocess Runs import lightgbm as lgb # hyperopt set-up from functools import partial from hyperopt import hp, tpe, space_eval, STATUS_OK, Trials from hyperopt.fmin import fmin # custom imports import sys import s3fs from data_helper import seed_everything from data_helper import demand, create_dt, create_static_fea, create_dynamic_fea, create_ds from model_helper import run_model, save_fin_sub, get_sum from WRMSSEEvaluator import WRMSSEEvaluator, WRMSSEForLightGBM from aws_helper import write_csv_s3, read_csv_s3 warnings.filterwarnings('ignore') # read statistics dataset from sys import platform if platform == "linux" or platform == "linux2": DATA_PATH = "/home/ec2-user/m5/data" aux_model_path = r"/home/ec2-user/m5/python/aux_model" sub_path = r"/home/ec2-user/m5/submission" META_DATA_PATH = "/home/ec2-user/m5/data/meta_data" elif platform == "win32": DATA_PATH = r"C:\Users\PP9596\Documents\Bitbucket\M5\data" aux_model_path = r"C:\Users\PP9596\Documents\Bitbucket\M5\python\aux_model" sub_path = r"C:\Users\PP9596\Documents\Bitbucket\M5\submission" ########################### Helpers ##################################################### ## Multiprocess Runs def df_parallelize_run(func, t_split): num_cores = np.min([N_CORES,len(t_split)]) pool = Pool(num_cores) df = pd.concat(pool.map(func, t_split), axis=1) pool.close() pool.join() return df def get_valid_data(validation_flag=False): ''' Get dataset for evaluation ''' if validation_flag: train_df =pd.read_csv(os.path.join(DATA_PATH, "sales_train_validation.csv")) else: train_df =pd.read_csv(os.path.join(DATA_PATH, "sales_train_evaluation.csv")) calendar = pd.read_csv(os.path.join(DATA_PATH, "calendar.csv")) calendar["date"] = pd.to_datetime(calendar["date"]) prices = pd.read_csv(os.path.join(DATA_PATH, "sell_prices.csv")) return train_df, calendar, prices def gen_eval_obj(validation_flag=True, tr_start_data=120, tr_end_date=1913): # load data for estimator train_df, calendar, prices = get_valid_data(validation_flag) train_df.sort_values("id", inplace = True) train_df.reset_index(drop=True, inplace = True) # create WRMSSE object # Load train fold data for WRMSSEE catcol = ['id', 'item_id', 'dept_id', 'cat_id', 'store_id', 'state_id'] tr_numcol = catcol + [f"d_{i}" for i in range(tr_start_data,tr_end_date+1)] vl_numcol = [f"d_{i}" for i in range(tr_end_date+1, tr_end_date+29)] # train_fold_df = train_df.iloc[:, :-28] # valid_fold_df = train_df.iloc[:, -28:] train_fold_df = train_df[tr_numcol] valid_fold_df = train_df[vl_numcol] # load WRMSSEE class evaluator = WRMSSEEvaluator(train_fold_df, valid_fold_df, calendar, prices) valid_col = valid_fold_df.columns valid_col = valid_col.insert(0, "id") return evaluator, valid_col, train_df["id"] # Seed everything SEED = 1010 # We want all things seed_everything(SEED) # to be as deterministic # + ########################### Model params ###################################################### VER = 24 # Our model version VERSION_U = 4 # Uncertainity version N_CORES = psutil.cpu_count() # Available CPU cores # 
LGB Parameter lgb_params = {'boosting_type': 'gbdt', 'objective': 'poisson', 'learning_rate': 0.07, 'force_row_wise': True, 'metric': 'rmse', 'n_estimators': 4000, 'num_leaves': 2**6-1, 'min_child_samples': 2**5-1, 'subsample': 0.7, 'subsample_freq': 1, 'colsample_bytree': 0.6, 'lambda_l2': 0.9, 'nthread' : 12} # lgb_params_logit = lgb_params # lgb_params_logit['objective'] ='binary' #LIMITS and const TARGET = 'sales' # Our target P_HORIZON = 28 # Prediction horizon USE_AUX = False # Use or not pretrained models max_lags = 60 tagg = 1 END_TRAIN = 1913 # End day of our train set # - def get_pred_loss(evaluator, val_id, sub_fin_dict, Validation_Sum=None): # generate submission sub_fin = pd.DataFrame() for k in sub_fin_dict.keys(): sub_fin = sub_fin.append(sub_fin_dict[k]) val_sum = get_sum(sub_fin, PRED_DURATION=28, inc_eval=False) val_id = pd.DataFrame(val_id) val_id["id"] = val_id["id"].str.replace("evaluation$", "validation") ix = sub_fin.id.str.contains("validation") sub_fin_val = sub_fin[ix] sub_fin_val.reset_index(drop=True, inplace = True) sub_fin_val = val_id.merge(sub_fin_val, on= "id", copy = False) # sub_fin.sort_values("id", inplace = True) # sub_fin.reset_index(drop=True, inplace = True) sub_fin_val.columns = valid_col # RMSSE loss function loss = evaluator.score(sub_fin_val.loc[:,valid_col[1:]]) print("Test WRMSSE Loss : ", loss) # Projected loss # Split & constant test if Validation_Sum is None: Validation_Sum = 1231764 TOTAL_SUM = sum(sub_fin_val.loc[:,valid_col[1:]].sum()) Factor = Validation_Sum/TOTAL_SUM sub_fin_val_test = sub_fin_val.loc[:,valid_col[1:]]*Factor ploss = evaluator.score(sub_fin_val_test.loc[:,valid_col[1:]]) print("Test WRMSSE Loss : ", ploss) return loss, ploss, val_sum loss_summary =[] START_TRAIN = 150 # 150 1183 dt, train_cols, train_mask,cat_feats = create_ds(START_TRAIN=START_TRAIN, END_TRAIN=END_TRAIN, stype=None) # + loss_summary =[] START_TRAIN = 150 # 150 1183 # We can skip some rows (Nans/faster training) # LGB Modelling lgb_params = {'boosting_type':'gbdt', 'objective': 'poisson', 'learning_rate': 0.04, 'force_row_wise': True, 'metric': 'rmse', 'n_estimators': 4000, 'num_leaves': 2**14-1, 'min_child_samples': 2**15-1, # 15 'subsample': 0.7, 'subsample_freq': 1, 'colsample_bytree': 0.6, #0.6, 'lambda_l2': 0.9, 'nthread' : 12} # Run at catagory level # STYPE = ['Intermittent', 'Lumpy', 'Erratic', 'Smooth'] evaluator, valid_col, val_id = gen_eval_obj(validation_flag=False, tr_start_data=1, tr_end_date=END_TRAIN) sub_fin_dict={} get_level_sum = [] STYPE = [None] for i, s in enumerate(STYPE): print("Processing for type - ", s) dt, train_cols, train_mask,cat_feats = create_ds(START_TRAIN=START_TRAIN, END_TRAIN=END_TRAIN, stype=s) print("Number of records - ", dt.shape) create_dynamic_fea(dt, lags=[7, 14], wins=[7, 14]) modelname = "estimator_t1_TIME_FULL_DS_VER_"+ str(VER) + "_TIMEAGG_" + str(i) + ".pkl" sub_fname = "full_data_t1_TIME_Agg_VER_" + str(VER) + "_TIMEAGG_" + str(i) train_cols, pred_val = run_model(dt, modelname=modelname, sub_fname=sub_fname, END_TRAIN=max(dt['d']), P_HORIZON=28, lgb_params=lgb_params, VER=VER, TIME_FILTER=0, logit_model=False, use_fake_val=False, val_prop=0.2, wrmssee_flag=True) key_level_sub = save_fin_sub(modelname, sub_fname, train_cols, VER=VER, lvl=0, PRED_DURATION=28, fday=max(dt['d']), save_sub=True, stype=s) sub_fin_dict[s] = key_level_sub get_level_sum.append(get_sum(sub_fin_dict[s], PRED_DURATION=28, inc_eval=False)) del dt loss, ploss, val_sum = get_pred_loss(evaluator, val_id, sub_fin_dict) 
loss_summary.append([n, loss, ploss, val_sum]) # - loss_summary # # Re-build with re-training # + loss_summary =[] START_TRAIN = 150 # 150 1183 # We can skip some rows (Nans/faster training) END_TRAIN = 1913 # End day of our train set # LGB Parameter lgb_params = {'boosting_type':'gbdt', # 'dart', 'objective': 'poisson', # 'tweedie_variance_power': 1.52, 'learning_rate': 0.04, 'force_row_wise': True, 'metric': 'rmse', 'n_estimators': 4000, 'num_leaves': 2**14-1, 'min_child_samples': 2**15-1, 'subsample': 0.7, 'subsample_freq': 1, 'colsample_bytree': 0.6, #0.6, 'lambda_l2': 0.9, 'nthread' : 12} # Run at catagory level # STYPE = ['Intermittent', 'Lumpy', 'Erratic', 'Smooth'] evaluator, valid_col, val_id = gen_eval_obj(validation_flag=False, tr_start_data=1, tr_end_date=END_TRAIN) sub_fin_dict={} get_level_sum = [] STYPE = [None] for i, s in enumerate(STYPE): print("Processing for type - ", s) dt, train_cols, train_mask,cat_feats = create_ds(START_TRAIN=START_TRAIN, END_TRAIN=END_TRAIN, stype=s) print("Number of records - ", dt.shape) create_dynamic_fea(dt, lags=[1, 7, 14], wins=[1, 7, 14]) modelname = "estimator_t1_TIME_FULL_DS_VER_"+ str(VER) + "_TIMEAGG_" + str(i) + ".pkl" sub_fname = "full_data_t1_TIME_Agg_VER_" + str(VER) + "_TIMEAGG_" + str(i) train_cols, pred_val = run_model(dt, modelname=modelname, sub_fname=sub_fname, END_TRAIN=max(dt['d']), P_HORIZON=28, lgb_params=lgb_params, VER=VER, TIME_FILTER=0, logit_model=False, use_fake_val=False, val_prop=0.2, wrmssee_flag=False) key_level_sub_val = save_fin_sub(modelname, sub_fname, train_cols, VER=VER, lvl=0, PRED_DURATION=28, fday=max(dt['d']), save_sub=True, stype=s) # load full trained model estimator = pickle.load(open(os.path.join(aux_model_path , modelname), 'rb')) modelname1 = "full_estimator_t1_TIME_FULL_DS_VER_"+ str(VER) + "_TIMEAGG_" + str(i) + ".pkl" train_cols, val_data_reg1 = run_model(dt, modelname=modelname1, sub_fname=sub_fname, END_TRAIN=max(dt['d']), P_HORIZON=28, lgb_params=lgb_params, VER=VER, TIME_FILTER=0, logit_model=False, estimator=estimator, use_fake_val=False, val_prop=0.2, wrmssee_flag=False) # Replace modelname sub_fname = "full_data_t1_TIME_Agg_VER_" + str(VER) + "_TIMEAGG_" + str(i) key_level_sub_full = save_fin_sub(modelname1, sub_fname, train_cols, VER=VER, lvl=0, PRED_DURATION=28, fday=max(dt['d']), save_sub=True, stype=s) sub_fin_dict[s] = key_level_sub_val get_level_sum.append(get_sum(sub_fin_dict[s], PRED_DURATION=28, inc_eval=False)) del dt loss, ploss, val_sum = get_pred_loss(evaluator, val_id, sub_fin_dict) loss_summary.append([n, loss, ploss, val_sum]) # - valid_col key_level_sub_full.to_csv(os.path.join(sub_path, "Point_forecast_full_ds_pre_validation_v23pks.csv"), index=False) key_level_sub_val.to_csv(os.path.join(sub_path, "Point_forecast_val_ds_pre_validation_v23pks.csv"), index=False)
11,493
/fig2_fbm_myelo_lympho_progen/suppfig4c_FBM_bcell_preb_neg_pos_degs_SW.ipynb
70176a432bbd2286447abd23227d3d74386c8ec4
[ "MIT" ]
permissive
haniffalab/FCA_bone_marrow
https://github.com/haniffalab/FCA_bone_marrow
10
5
null
2021-09-20T07:59:01
2021-09-15T11:11:29
Jupyter Notebook
Jupyter Notebook
false
false
.py
1,940,379
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Import packages and data import numpy as np import pandas as pd import scanpy as sc import seaborn as sns import scipy.stats import anndata import matplotlib.pyplot as plt import matplotlib as mpl from matplotlib.axes._axes import _log as matplotlib_axes_logger from scipy import sparse matplotlib_axes_logger.setLevel('ERROR') import warnings warnings.filterwarnings('ignore') pd.set_option('display.max_rows', 500) pd.set_option('display.max_columns', 500) pd.set_option('display.width', 1000) sc.settings.verbosity = 1 # verbosity: errors (0), warnings (1), info (2), hints (3) # Set up the plot config for viewing the annotation clearly. sc.settings.set_figure_params(dpi=120, dpi_save=1000) sc.logging.print_versions() # # Load FBM B lineage adata = sc.read("/Users/b8058304/Documents/PhD_work/Coding/manuscript_figs_mk2/data/fig3a_bcell_dr_20210115.h5ad") adata cell_numbers = adata.obs.groupby(["cell.labels"]).apply(len) cell_numbers # # Assign cells as either "cycling" or "not cycling" based on whether their cell cycle score cutoff is above or below the mean adata.obs["S/G2M_score_combined"].mean() # + a = pd.DataFrame(adata.obs["S/G2M_score_combined"]) adata.obs["bcell_state"] = pd.cut(a["S/G2M_score_combined"], [-1, 0.127, np.inf], labels=['not_cycling','cycling']) # - cell_numbers = adata.obs.groupby(["bcell_state"]).apply(len) cell_numbers adata.obs["cell.labels_state"] = adata.obs["cell.labels"].astype(str) + '_' + adata.obs["bcell_state"].astype(str) sc.pl.draw_graph(adata, color=['S/G2M_score_combined', 'bcell_state', 'cell.labels_state'], layout='fa') # # Run DEGs between the negatively and positively selected Pre-B # run degs on normalised, logged and scaled data (saved as adata.X). # calculate degs using wilcoxon rank sum test with benjamini-hochberg correction. #based on ln transformed count data sc.tl.rank_genes_groups(adata, groupby='cell.labels_state', method='wilcoxon', use_raw=False, log_transformed=True, groups = ['pre B progenitor_cycling', 'pre B progenitor_not_cycling']) # filter the degs for those which are expressed in at least 25% of cells in cluster. log2fc of the ln-transformed # data will be shown. 
sc.tl.filter_rank_genes_groups(adata, min_in_group_fraction=0.25, min_fold_change = 0.25, use_raw=False) # + # save df for unfiltered degs result = adata.uns['rank_genes_groups'] groups = result['names'].dtype.names degs_by_cluster = pd.DataFrame({group + '_' + key[:7]: result[key][group] for group in groups for key in ['names', 'logfoldchanges', 'pvals', 'pvals_adj']}) # the degs_by_cluster df shows the log2 fold change for each gene ordered by z-score underlying the computation # of a p-value for each gene for each group degs_by_cluster.to_csv("/Users/b8058304/Documents/PhD_work/Coding/manuscript_figs_mk2/figs/clustering_degs/preb_state_20210115.csv") degs_by_cluster[:10] # - # save df for filtered degs result = adata.uns['rank_genes_groups_filtered'] groups = result['names'].dtype.names degs_by_cluster_filtered = pd.DataFrame({group + '_' + key[:7]: result[key][group] for group in groups for key in ['names', 'logfoldchanges', 'pvals', 'pvals_adj']}) # the degs_by_cluster df shows the log2 fold change for each gene ordered by z-score underlying the computation # of a p-value for each gene for each group degs_by_cluster_filtered.to_csv("/Users/b8058304/Documents/PhD_work/Coding/manuscript_figs_mk2/figs/clustering_degs/preb_state_filtered_20210115.csv") degs_by_cluster_filtered[:10] # # Remove cycling genes and run DEGs between the negatively and positively selected Pre-B # + #Score cell cycle and visualize the effect: # load file in cc_genes_file = '/Users/b8058304/Documents/PhD_work/Coding/bm/resources_for_pipelines/cell_cycle_makosco.csv' cc_genes = pd.read_csv(cc_genes_file, delimiter=',') # removing na from s and g2m list s_genes = cc_genes['S'].dropna().tolist() g2m_genes = cc_genes['G2/M'].dropna().tolist() g1_s_genes = cc_genes['G1/S'].dropna().tolist() m_genes = cc_genes['M'].dropna().tolist() m_g1_genes = cc_genes['M/G1'].dropna().tolist() total_cc_genes_list = s_genes + g2m_genes + g1_s_genes + m_genes + m_g1_genes y_genes = total_cc_genes_list no_trail = [] for x in y_genes: y = x.strip() no_trail.append(y) total_cc_genes_list = no_trail # - adata.shape non_cycle_genes_list = [name for name in adata.var_names if not name in total_cc_genes_list] adata_no_cycle = adata[:, non_cycle_genes_list] adata_no_cycle.shape # run degs on normalised, logged and scaled data (saved as adata_no_cycle.X). # calculate degs using wilcoxon rank sum test with benjamini-hochberg correction. #based on ln transformed count data sc.tl.rank_genes_groups(adata_no_cycle, groupby='cell.labels_state', method='wilcoxon', use_raw=False, log_transformed=True, groups = ['pre B progenitor_cycling', 'pre B progenitor_not_cycling']) # filter the degs for those which are expressed in at least 25% of cells in cluster. log2fc of the ln-transformed # data will be shown. 
sc.tl.filter_rank_genes_groups(adata_no_cycle, min_in_group_fraction=0.25, min_fold_change = 0.25, use_raw=False) # + # save df for unfiltered degs result = adata_no_cycle.uns['rank_genes_groups'] groups = result['names'].dtype.names degs_by_cluster = pd.DataFrame({group + '_' + key[:7]: result[key][group] for group in groups for key in ['names', 'logfoldchanges', 'pvals', 'pvals_adj']}) # the degs_by_cluster df shows the log2 fold change for each gene ordered by z-score underlying the computation # of a p-value for each gene for each group degs_by_cluster.to_csv("/Users/b8058304/Documents/PhD_work/Coding/manuscript_figs_mk2/figs/clustering_degs/preb_state_nocycle_20210115.csv") degs_by_cluster[:10] # - # save df for filtered degs result = adata_no_cycle.uns['rank_genes_groups_filtered'] groups = result['names'].dtype.names degs_by_cluster_filtered = pd.DataFrame({group + '_' + key[:7]: result[key][group] for group in groups for key in ['names', 'logfoldchanges', 'pvals', 'pvals_adj']}) # the degs_by_cluster df shows the log2 fold change for each gene ordered by z-score underlying the computation # of a p-value for each gene for each group degs_by_cluster_filtered.to_csv("/Users/b8058304/Documents/PhD_work/Coding/manuscript_figs_mk2/figs/clustering_degs/preb_state_nocycle_filtered_20210115.csv") degs_by_cluster_filtered[:10] # # Save the data adata.write("/Users/b8058304/Documents/PhD_work/Coding/manuscript_figs_mk2/data/fig3a_bcell_dr_20210115.h5ad") # # Open the obj and run violin plots import numpy as np import pandas as pd import scanpy as sc import seaborn as sns import scipy.stats import anndata import matplotlib.pyplot as plt import matplotlib as mpl from matplotlib.axes._axes import _log as matplotlib_axes_logger from scipy import sparse matplotlib_axes_logger.setLevel('ERROR') sc.settings.verbosity = 1 # verbosity: errors (0), warnings (1), info (2), hints (3) # Set up the plot config for viewing the annotation clearly. sc.settings.set_figure_params(dpi=120, dpi_save=1000) sc.logging.print_versions() # # Import the FBM B cells adata = sc.read("/Users/b8058304/Documents/PhD_work/Coding/manuscript_figs_mk2/data/fig3a_bcell_dr_20210115.h5ad") # # Run violin plots # + sc.settings.set_figure_params(dpi=100, dpi_save=1000) subset = adata[adata.obs['cell.labels_state'].isin(['pre B progenitor_cycling', 'pre B progenitor_not_cycling'])].copy() subset.obs["cell.labels_state"] = subset.obs["cell.labels_state"].cat.reorder_categories( ['pre B progenitor_not_cycling', 'pre B progenitor_cycling']) genes = ["IRF4", "CXCR4", "SOX4", "IGHM", "IGLC2", "IGKC", "KLF2", "MALAT1", "BCL7A", "TCL1A", "CD81", "CD38", "MKI67", "TOP2A", "CDK1", "AURKB", "CENPM", "CENPU", "CENPK", "CENPF", "CDKN3", "HMGA1", "UBE2C"] sc.pl.stacked_violin(subset, var_names=genes, save="pre_b_degs_pos_neg_20210115.pdf", rotation=90, groupby='cell.labels_state', use_raw=False, swap_axes=False, figsize=(6,2), row_palette=["#ff0000", "#ff0000", "#ff0000", "#0000ff", "#0000ff", "#0000ff"]) # -
8,445
/My_BWSI_W3/My_02TextureClassificationWithNeuralNets.ipynb
14e8d04bcd5779c2617af7150eea8edd173c21e6
[]
no_license
cjin2019/Week3_public
https://github.com/cjin2019/Week3_public
0
0
null
2018-07-23T14:06:27
2018-07-23T13:23:10
null
Jupyter Notebook
false
false
.py
30,747
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + [markdown] id="PBk7hgRJ4hRE" colab_type="text" # # Tissue Classification using Neural Networks # In this lab we will explore the use of texture in images and traditional machine learning approaches such as clustering. The dataset we will be using is available here: http://dx.doi.org/10.5281/zenodo.53169. # # ![alt text](https://www.researchgate.net/profile/Jakob_Kather/publication/303998214/figure/fig7/AS:391073710002224@1470250646407/Representative-images-from-our-dataset-Here-the-first-10-images-of-every-tissue-class.png) # # The above figure shows the 8 different classes of tissue we will be trying to identify. # + id="j9HUVMyJ4hRG" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}} # Imports from __future__ import print_function import os import numpy as np import matplotlib.pylab as plt from sklearn.model_selection import train_test_split from tensorflow.keras import Sequential from tensorflow.keras.layers import Dense from tensorflow.keras.optimizers import SGD from tensorflow.keras.utils import to_categorical # + [markdown] id="0rWqAlD84hRN" colab_type="text" # ## Step 1 # * Load the data (done for you) # * The "data" variable stores 5000 images of shape 150x150. This means data has shape (5000, 150, 150). These images are loaded here as grayscale. # * The "labels" variable stores 5000 labels (0-7). This means "labels" has shape (5000,) # * Split data into training and testing subsets (left up to you) # * Check out the sklearn function train_test_split from sklearn.model_selection # + id="mJo-Fc6A4hRP" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 86} outputId="63694a2e-da72-4662-fcc3-3c1f99cf799d" executionInfo={"status": "ok", "timestamp": 1532443835464, "user_tz": 240, "elapsed": 23773, "user": {"displayName": "Caroline J", "photoUrl": "//lh5.googleusercontent.com/-UW6T_Czu5Tc/AAAAAAAAAAI/AAAAAAAAAmk/_PdmajiiHBY/s50-c-k-no/photo.jpg", "userId": "115113466279912359883"}} # ! git clone https://github.com/BeaverWorksMedlytics/Week3_public.git # Build the path to the data folder. 
No need to change directories # There are a total of 6 files you will have to load data_dir = os.path.join( os.getcwd(), 'Week3_public', 'data', 'crc') # + id="HKTScej54hRU" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 207} outputId="e00a456c-c91b-4f0e-ffda-785f765df994" executionInfo={"status": "ok", "timestamp": 1532453520750, "user_tz": 240, "elapsed": 1358, "user": {"displayName": "Caroline J", "photoUrl": "//lh5.googleusercontent.com/-UW6T_Czu5Tc/AAAAAAAAAAI/AAAAAAAAAmk/_PdmajiiHBY/s50-c-k-no/photo.jpg", "userId": "115113466279912359883"}} #### ATTEMPT 1 ##### # Load data and split into training, testing sets y = np.load(os.path.join(data_dir, 'rgb01.npz')) labels = y['labels'] data = y['rgb_data'] data = data[:,:,:,0] #takes only the r value label_str = y['label_str'] label_str = label_str.tolist() # this is to convert label_str back to a dictionary y = [] print(data.shape) for ii in range(2,6): filename = os.path.join(data_dir, 'rgb0' + str(ii) + '.npz') print('loading ', filename) y = np.load(filename) labels = np.append(labels, y['labels'], axis=0) data = np.append(data, y['rgb_data'][:,:,:,0], axis=0) print(data.shape) y = [] print( data.shape ) print( labels.shape ) # + id="nuLcq7wLOFX7" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 207} outputId="252f8c94-fe14-4d99-f8a5-0c28db94e48a" executionInfo={"status": "ok", "timestamp": 1532453630747, "user_tz": 240, "elapsed": 1815, "user": {"displayName": "Caroline J", "photoUrl": "//lh5.googleusercontent.com/-UW6T_Czu5Tc/AAAAAAAAAAI/AAAAAAAAAmk/_PdmajiiHBY/s50-c-k-no/photo.jpg", "userId": "115113466279912359883"}} #### ATTEMPT 2 ##### # Load data and split into training, testing sets y = np.load(os.path.join(data_dir, 'rgb01.npz')) labels = y['labels'] data = y['rgb_data'] data = data[:,:,:,0:3] #takes only the r value label_str = y['label_str'] label_str = label_str.tolist() # this is to convert label_str back to a dictionary y = [] print(data.shape) for ii in range(2,6): filename = os.path.join(data_dir, 'rgb0' + str(ii) + '.npz') print('loading ', filename) y = np.load(filename) labels = np.append(labels, y['labels'], axis=0) data = np.append(data, y['rgb_data'][:,:,:,0:3], axis=0) print(data.shape) y = [] print( data.shape ) print( labels.shape ) # + id="k55a7z6L4hRW" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 69} outputId="17923ee1-c796-4740-b26b-3222766c5fe3" executionInfo={"status": "ok", "timestamp": 1532453526136, "user_tz": 240, "elapsed": 891, "user": {"displayName": "Caroline J", "photoUrl": "//lh5.googleusercontent.com/-UW6T_Czu5Tc/AAAAAAAAAAI/AAAAAAAAAmk/_PdmajiiHBY/s50-c-k-no/photo.jpg", "userId": "115113466279912359883"}} ##### ATTEMPT 1 ###### num_images, nrows, ncols = data.shape # split into training and testing sets X_train, X_test, y_train, y_test = train_test_split(data, labels, test_size = 0.25) print("Current separation:", X_train.shape, y_train.shape) # convert the labels from 1-D arrays to categorical type print("Previous Label:", y_train[0]) y_train = to_categorical(y_train) y_test = to_categorical(y_test) print("New Label Shape:", y_train.shape) # + id="u2bqxjWZU3RM" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 69} outputId="7f9df1a6-7de0-4932-da97-af963487a8b4" executionInfo={"status": 
"ok", "timestamp": 1532453635395, "user_tz": 240, "elapsed": 1308, "user": {"displayName": "Caroline J", "photoUrl": "//lh5.googleusercontent.com/-UW6T_Czu5Tc/AAAAAAAAAAI/AAAAAAAAAmk/_PdmajiiHBY/s50-c-k-no/photo.jpg", "userId": "115113466279912359883"}} ##### ATTEMPT 2 ###### num_images, nrows, ncols, nrgb = data.shape # split into training and testing sets X_train, X_test, y_train, y_test = train_test_split(data, labels, test_size = 0.25) print("Current separation:", X_train.shape, y_train.shape) # convert the labels from 1-D arrays to categorical type print("Previous Label:", y_train[0]) y_train = to_categorical(y_train) y_test = to_categorical(y_test) print("New Label Shape:", y_train.shape) # + [markdown] id="955WO4HE4hRZ" colab_type="text" # ## Normalize and Reshape Data # All images should be normalized to the range 0-1 by dividing by 255. # # Additionally, because this is a ANN, not a CNN, we need to reshape the data to be one dimensional. In training and test data, colapse the row and column dimensions into one dimension using reshape(). # #### Note # * Using the La\*b colorspace : If you convert your images to the La\*b colorspace, the scaling factor will change. Each channel in this colorspace will have a different range and normalization of each space will involve scaling each channel separately. Additionally, the a\* channel can have a negative range. This also needs to be taken into account. # * Using the HSV/HSI colorspace : Similar considerations apply if you are using the HSV/HSI colorspace. The only difference is that the HSV/HSI colorspace will have all positive values. # + id="eWkDphoh4hRa" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}} # Assuming we are using the RGB colorspace # Normalize all images so that they are 0-1 # Reshape the data # + id="8TtHfWoEzfro" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 34} outputId="d7e26529-f787-4884-8cce-996576aad7a6" executionInfo={"status": "ok", "timestamp": 1532453532720, "user_tz": 240, "elapsed": 913, "user": {"displayName": "Caroline J", "photoUrl": "//lh5.googleusercontent.com/-UW6T_Czu5Tc/AAAAAAAAAAI/AAAAAAAAAmk/_PdmajiiHBY/s50-c-k-no/photo.jpg", "userId": "115113466279912359883"}} ##### ATTEMPT 1 ######## ### just divide by 255 ##### n_images, n_rows, n_cols = X_train.shape X_train = X_train/255 X_train = X_train.reshape((n_images, n_rows*n_cols)) print('After reshape:', X_train.shape) # + id="__YwEaHI4YNc" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 138} outputId="45b404f7-2fd7-469a-89e9-55ca69040250" executionInfo={"status": "ok", "timestamp": 1532453536474, "user_tz": 240, "elapsed": 1134, "user": {"displayName": "Caroline J", "photoUrl": "//lh5.googleusercontent.com/-UW6T_Czu5Tc/AAAAAAAAAAI/AAAAAAAAAmk/_PdmajiiHBY/s50-c-k-no/photo.jpg", "userId": "115113466279912359883"}} #### ATTEMPT 1 ###### #normalize the test data n_images, n_rows, n_cols = X_test.shape X_test = X_test/255 X_test = X_test.reshape((n_images, n_rows * n_cols)) print('Current y_test', y_test) # + id="ch41GBl2VG1p" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 155} outputId="b5392b2e-880e-45c4-b97f-3f950292f1db" executionInfo={"status": "ok", "timestamp": 1532453646767, "user_tz": 240, "elapsed": 1798, "user": {"displayName": "Caroline J", "photoUrl": 
"//lh5.googleusercontent.com/-UW6T_Czu5Tc/AAAAAAAAAAI/AAAAAAAAAmk/_PdmajiiHBY/s50-c-k-no/photo.jpg", "userId": "115113466279912359883"}} #### ATTEMPT 2 ##### n_images, n_rows, n_cols, nrgb = X_train.shape X_train = X_train/255 X_train = X_train.reshape((n_images, n_rows*n_cols*nrgb)) print('After reshape:', X_train.shape) #normalize the test data n_images, n_rows, n_cols, nrgb = X_test.shape X_test = X_test/255 X_test = X_test.reshape((n_images, n_rows * n_cols * nrgb)) print('Current y_test', y_test) # + id="RwxFP9ucXT6c" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}} ### ATTEMPT 3 ######## # + id="sGa2Z5Ba1BS2" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 3698} outputId="beb435f9-0503-454c-903c-b24f283072b9" executionInfo={"status": "ok", "timestamp": 1532453545344, "user_tz": 240, "elapsed": 755, "user": {"displayName": "Caroline J", "photoUrl": "//lh5.googleusercontent.com/-UW6T_Czu5Tc/AAAAAAAAAAI/AAAAAAAAAmk/_PdmajiiHBY/s50-c-k-no/photo.jpg", "userId": "115113466279912359883"}} ### testing what it looks like ##### for i in range(X_train[2].shape[0]): if X_train[2][i] > 0: print("Not Zero") print('Sample Image:', X_train[2]) # + [markdown] id="zPC7bjF74hRc" colab_type="text" # ## Step 2 # At this point, the data has been split into training and testing sets and normalized. We will now design a fully connected neural network for texture classification. # # <img src="http://cs231n.github.io/assets/nn1/neural_net2.jpeg" width="50%"></img> # # ( Image from http://cs231n.github.io/convolutional-networks/ ) # # When designing a fully connected network for classification, we have several decisions to make. # # **Network Architecuture** # * How many layers will our network have ? # * How many neurons per layer ? # * What is an appropriate batch size, learning rate and number of training epochs ? # # **Data input** # * Do we use the raw data ? # * RGB or just gray channel ? # * Does the use of different colorspaces lead to better results for a given network architecture ? # * Can we use any of the texture features from the previous lab as inputs to this model ? # * How does data augmentation affect the results ? # # Other considerations, we will not be exploring : # * What is the trade-off between input data sizes and batch size ? # * Is the GPU always the appropriate platform for training ? # * How does hardware influence inputs and batch sizes for a given desired accuracy ? 
# + id="rm8iHsFF4hRd" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}} # Define the data shapes based on your decision to use rgb or grayscale or other colorpsaces or texture features or # some combination of these inputs num_classes = 8 input_shape = nrows*ncols # + id="kQDQztQ1V16a" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}} #### ATTEMPT 2 #### num_classes = 8 input_shape = nrows*ncols*nrgb # + [markdown] id="q4Q3AADK4hRg" colab_type="text" # ## Step 3 # Design your network here using Keras # + id="A50F4vXr2glS" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}} import tensorflow as tf # + id="18Mpluyc4hRg" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 242} outputId="a63530c3-6ebd-4488-9aff-70eeb9b8b263" executionInfo={"status": "ok", "timestamp": 1532453555478, "user_tz": 240, "elapsed": 720, "user": {"displayName": "Caroline J", "photoUrl": "//lh5.googleusercontent.com/-UW6T_Czu5Tc/AAAAAAAAAAI/AAAAAAAAAmk/_PdmajiiHBY/s50-c-k-no/photo.jpg", "userId": "115113466279912359883"}} #### ATTEMPT 1 ##### # Create your network model = [] model = Sequential() # Add input layer model.add(Dense(64, activation=tf.nn.relu, input_shape = (nrows * ncols, ))) # Add fully connected layers model.add(Dense(64, activation=tf.nn.relu)) # See Dense : https://keras.io/layers/core/#dense # Add final output layer - This should have as many neurons as the number # of classes we are trying to identify model.add(tf.keras.layers.Dense(num_classes, activation=tf.nn.softmax)) model.summary() # + id="8B-CHWFOVq33" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 242} outputId="4caff2ea-8ad6-4f49-b0f0-1f0be789af8c" executionInfo={"status": "ok", "timestamp": 1532454654983, "user_tz": 240, "elapsed": 727, "user": {"displayName": "Caroline J", "photoUrl": "//lh5.googleusercontent.com/-UW6T_Czu5Tc/AAAAAAAAAAI/AAAAAAAAAmk/_PdmajiiHBY/s50-c-k-no/photo.jpg", "userId": "115113466279912359883"}} ### ATTEMPT 2 ###### # Create your network model = [] model = Sequential() # Add input layer model.add(Dense(32, activation=tf.nn.relu, input_shape = (nrows * ncols * nrgb, ))) # Add fully connected layers model.add(Dense(32, activation=tf.nn.relu)) # See Dense : https://keras.io/layers/core/#dense # Add final output layer - This should have as many neurons as the number # of classes we are trying to identify model.add(tf.keras.layers.Dense(num_classes, activation=tf.nn.softmax)) model.summary() # + id="bBwowtmjW9JO" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}} # + [markdown] id="Jzt0c71-4hRj" colab_type="text" # ## Step 4 # Compile the model you designed. Compiltation of the Keras model results in the initialization of model weights and sets other model properties. 
# + id="Utkn9Tqc4hRk" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}} model.compile(loss='categorical_crossentropy', optimizer=SGD(), metrics=['accuracy']) # + [markdown] id="hMXqy9jM4hRn" colab_type="text" # ## Step 5 # Train model # + id="o0s4RNTs4hRo" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 383} outputId="ca50e440-1ec6-4b6e-a658-36746a0a41fb" executionInfo={"status": "ok", "timestamp": 1532455061064, "user_tz": 240, "elapsed": 47895, "user": {"displayName": "Caroline J", "photoUrl": "//lh5.googleusercontent.com/-UW6T_Czu5Tc/AAAAAAAAAAI/AAAAAAAAAmk/_PdmajiiHBY/s50-c-k-no/photo.jpg", "userId": "115113466279912359883"}} y = model.fit(X_train, y_train, batch_size = 20, epochs = 10) # + [markdown] id="Jd9MGDK94hRr" colab_type="text" # ## Step 6 # See how your model performs by uisng it for inference. # * What is the accuracy of classification ? # * Change your model, re-compile and test. Can you improve the accuracy of the model ? # # + id="KyAeLkGo4hRt" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}} # predict labels - use the test set for prediction pred_labels = model.predict(X_test) # + id="TJEhLJKx4hRv" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 173} outputId="573a1b66-16fb-4ad0-e87d-085a19c671e8" executionInfo={"status": "ok", "timestamp": 1532455101397, "user_tz": 240, "elapsed": 643, "user": {"displayName": "Caroline J", "photoUrl": "//lh5.googleusercontent.com/-UW6T_Czu5Tc/AAAAAAAAAAI/AAAAAAAAAmk/_PdmajiiHBY/s50-c-k-no/photo.jpg", "userId": "115113466279912359883"}} from sklearn.metrics import accuracy_score from sklearn.metrics import confusion_matrix # We need to convert the categorical array test_labels and pred_labels into a vector # in order to use it in the calculation of the confusion matrix (i.e. convert from one-hot to integers) mat = confusion_matrix(np.argmax(y_test, axis=1), np.argmax(pred_labels, axis = 1)) acc = accuracy_score(np.argmax(y_test, axis=1), np.argmax(pred_labels, axis=1)) print(acc) print(mat) # + id="Em2WIdYb4hRx" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}} plt.figure(figsize=(8,6)) plt.imshow(mat, cmap='hot', interpolation='nearest') plt.grid(False) plt.colorbar() plt.xlabel('true label') plt.ylabel('predicted label') plt.show() # + [markdown] id="GTaVC3IX4hR1" colab_type="text" # ## Assignment # * In Step 3 design your own network # * Does the model perform better if you use all three RGB channels ? # * How does the performance change when using the La*b colorspace ? 
# # + id="GAPOBem8kKPG" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 202} outputId="4ac22c52-d289-4413-93b3-c5e3f2c715b0" executionInfo={"elapsed": 19851, "status": "ok", "timestamp": 1532357624409, "user": {"displayName": "Thomas Possidente", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s128", "userId": "106075129304574823577"}, "user_tz": 240} # Load data as RGB y = np.load(os.path.join(data_dir, 'rgb01.npz')) labels = y['labels'] data_rgb = y['rgb_data'] label_str = y['label_str'] label_str = label_str.tolist() # this is to convert label_str back to a dictionary y = [] print(data_rgb.shape) for ii in range(2,6): filename = os.path.join(data_dir, 'rgb0' + str(ii) + '.npz') print('loading ', filename) y = np.load(filename) labels = np.append(labels, y['labels'], axis=0) data_rgb = np.append(data_rgb, y['rgb_data']) print(data_rgb.shape) y = [] data_rgb = data_rgb.astype('float') data_rgb = data_rgb.reshape(5000, 150, 150, 3) print( data_rgb.shape ) print( labels.shape ) num_images, nrows, ncols, dims = data_rgb.shape # + id="7kI1rNVCoWMt" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}}
19,071
/module3/assignment_applied_modeling_3.ipynb
d254c1dc7c21df9182e1a4f61b7fe5eb9627065b
[ "MIT" ]
permissive
adphelps/DS-Unit-2-Applied-Modeling
https://github.com/adphelps/DS-Unit-2-Applied-Modeling
0
0
MIT
2019-10-03T02:29:17
2019-09-19T15:49:32
null
Jupyter Notebook
false
false
.py
2,527,191
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + id="OefM9qbf2ya1" # !pip uninstall -y kaggle # !pip install kaggle==1.5.6 # %env KAGGLE_USERNAME = bassamsabber # %env KAGGLE_KEY = 717678f8ebfb53d84e9c47444c275871 # !kaggle competitions download -c computationalintelligencesc2020 # !unzip DataSet # !unzip computationalintelligencesc2020 # + colab={"base_uri": "https://localhost:8080/"} id="u8fP8z2E4PFt" outputId="8da24ef5-1110-47cf-a956-a1041fa8f8a5" from google.colab import drive drive.mount('/content/drive') # + colab={"base_uri": "https://localhost:8080/"} id="jcYnVl7j4J_W" outputId="e83a7c50-38ae-48e0-8bb2-c4c7a95d73ba" import numpy as np import cv2 from tqdm import tqdm import os import pandas as pd from os.path import join from random import shuffle from keras import layers from keras.layers import Input, Add, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D, AveragePooling2D, MaxPooling2D, GlobalAveragePooling2D from keras.initializers import glorot_uniform from keras.utils import plot_model from IPython.display import SVG from keras.utils.vis_utils import model_to_dot import keras.backend as K import tensorflow as tf from keras.preprocessing.image import ImageDataGenerator from keras.models import Sequential from keras.callbacks import ModelCheckpoint, EarlyStopping, ReduceLROnPlateau from keras.layers import Dense, Dropout, Activation, Flatten from keras.layers import Conv2D, MaxPooling2D, ZeroPadding2D from keras.layers.normalization import BatchNormalization from keras import optimizers, layers from keras.models import Model, load_model from keras.callbacks import ModelCheckpoint, EarlyStopping from keras.layers import Input, Conv2D, SeparableConv2D, Add, Dense, BatchNormalization, ReLU, MaxPool2D, GlobalAvgPool2D, Concatenate, Average,Maximum from keras.utils.data_utils import get_file print("Done") # + id="cNt7-XX345az" #seed import numpy as np import tensorflow as tf import random as rn import os os.environ['PYTHONHASHSEED'] = '0' np.random.seed(42) rn.seed(12345) session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1) from keras import backend as K tf.compat.v1.set_random_seed(1234) sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf) tf.compat.v1.keras.backend.set_session(sess) # + colab={"base_uri": "https://localhost:8080/"} id="97bO930r53I5" outputId="ecad49d8-7838-4892-a4c6-dbfcd906b791" f_1 ="/content/Scenes training set/Scenes training set/buildings" f_2 = "/content/Scenes training set/Scenes training set/forest" f_3 = "/content/Scenes training set/Scenes training set/glacier" f_4 = "/content/Scenes training set/Scenes training set/mountain" f_5 = "/content/Scenes training set/Scenes training set/sea" f_6 = "/content/Scenes training set/Scenes training set/street" #Detection often requires fine-grained visual information ####so we increase the input resolution of the network from 150x150 to 224x224 IMG_SIZE = 224 def create_train_data(): training_data = [] for img in tqdm(os.listdir(f_1)): path = os.path.join(f_1, img) img = cv2.imread(path) img = cv2.resize(img, (IMG_SIZE, IMG_SIZE)) #out = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB) training_data.append([np.array(img), np.array([1,0,0,0,0,0])]) for img in tqdm(os.listdir(f_2)): path = os.path.join(f_2, img) img = cv2.imread(path) img = 
cv2.resize(img, (IMG_SIZE, IMG_SIZE)) #out = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB) training_data.append([np.array(img), np.array([0,1,0,0,0,0])]) for img in tqdm(os.listdir(f_3)): path = os.path.join(f_3, img) img = cv2.imread(path) img = cv2.resize(img, (IMG_SIZE, IMG_SIZE)) #out = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB) training_data.append([np.array(img), np.array([0,0,1,0,0,0])]) for img in tqdm(os.listdir(f_4)): path = os.path.join(f_4, img) img = cv2.imread(path) img = cv2.resize(img, (IMG_SIZE, IMG_SIZE)) #out = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB) training_data.append([np.array(img), np.array([0,0,0,1,0,0])]) for img in tqdm(os.listdir(f_5)): path = os.path.join(f_5, img) img = cv2.imread(path) img = cv2.resize(img, (IMG_SIZE, IMG_SIZE)) #out = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB) training_data.append([np.array(img), np.array([0,0,0,0,1,0])]) for img in tqdm(os.listdir(f_6)): path = os.path.join(f_6, img) img = cv2.imread(path) img = cv2.resize(img, (IMG_SIZE, IMG_SIZE)) #out = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB) training_data.append([np.array(img), np.array([0,0,0,0,0,1])]) shuffle(training_data) #train_data=np.load('/content/drive/My Drive/data/train_data.npy', allow_pickle=True) np.save('train.npy', training_data) return training_data ####train_dataa = create_train_data() training_data = create_train_data() # + colab={"base_uri": "https://localhost:8080/"} id="Iudt22t_53QJ" outputId="acf00d88-8d02-4356-e8a7-fc7cfc965abc" TEST_DIR = "/content/Scenes testing test/Scenes testing test" def Create_test_data(): testing_data = [] for img in tqdm(os.listdir(TEST_DIR)): name=img path = os.path.join(TEST_DIR, img) img = cv2.imread(path) img = cv2.resize(img, (IMG_SIZE, IMG_SIZE)) testing_data.append([np.array(img),name]) np.save('test.npy', testing_data) return testing_data test_data = Create_test_data() # + id="dytg3y5U53Se" train_data=np.load("/content/drive/MyDrive/natural scense/train.npy", allow_pickle=True) test_data=np.load("/content/drive/MyDrive/natural scense/test.npy", allow_pickle=True) # + id="gm3licqn53U5" colab={"base_uri": "https://localhost:8080/"} outputId="a85c53fb-e698-4dda-d06a-a31f31bae34b" #Load weights of VGG19 NO TOP ON IMAGENET ##To make transfer learning ## #### WEIGHTS_PATH_NO_TOP = 'https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5' weights_path = get_file('vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5',WEIGHTS_PATH_NO_TOP) # + id="57k2_bn1Yxps" #padding: one of "valid" or "same" (case-insensitive). "valid" means no padding. ###"same" results in padding evenly to the left/right or up/down of the input such that output has the same height/width dimension as the input. 
# + id="mWZuV_h553Z8" def VGG16(include_top=False,weights='imagenet', classes=6): input = Input(shape=(224, 224, 3)) x = Conv2D(filters=64, kernel_size=3, padding='same', activation='relu')(input) x = Conv2D(filters=64, kernel_size=3, padding='same', activation='relu')(x) x = MaxPool2D(pool_size=2, strides=2, padding='same')(x) x = Conv2D(filters=128, kernel_size=3, padding='same', activation='relu')(x) x = Conv2D(filters=128, kernel_size=3, padding='same', activation='relu')(x) x = MaxPool2D(pool_size=2, strides=2, padding='same')(x) x = Conv2D(filters=256, kernel_size=3, padding='same', activation='relu')(x) x = Conv2D(filters=256, kernel_size=3, padding='same', activation='relu')(x) x = Conv2D(filters=256, kernel_size=3, padding='same', activation='relu')(x) x = MaxPool2D(pool_size=2, strides=2, padding='same')(x) x = Conv2D(filters=512, kernel_size=3, padding='same', activation='relu')(x) x = Conv2D(filters=512, kernel_size=3, padding='same', activation='relu')(x) x = Conv2D(filters=512, kernel_size=3, padding='same', activation='relu')(x) x = MaxPool2D(pool_size=2, strides=2, padding='same')(x) x = Conv2D(filters=512, kernel_size=3, padding='same', activation='relu')(x) x = Conv2D(filters=512, kernel_size=3, padding='same', activation='relu')(x) x = Conv2D(filters=512, kernel_size=3, padding='same', activation='relu')(x) x = MaxPool2D(pool_size=2, strides=2, padding='same')(x) # x = Flatten()(x) # x = Dense(units=4096, activation='relu')(x) # # x = Dropout(0.2)(x) # x = Dense(units=4096, activation='relu')(x) # # x = Dropout(0.2)(x) # output = Dense(units=2, activation='softmax')(x) model = Model(inputs=input, outputs=x) model.load_weights(weights_path) return model model = VGG16(include_top=False, weights='imagenet', classes=6) new_model = model.output #model.trainable = False #Pooling layers:GlobalAveragePooling2D::Global average pooling operation for spatial data. global_layer = GlobalAveragePooling2D() new_model = global_layer(new_model) new_model = Dense(3072, activation="relu")(new_model) new_model = Dense(6, activation="softmax")(new_model) # + id="G7C6YHOlLsDY" vgg_model = Model(inputs=model.input, outputs=new_model) # + colab={"base_uri": "https://localhost:8080/"} id="3oGzB6yJLsEv" outputId="4f4d4fc0-739d-4ef7-9760-f22b6bda7be9" vgg_model.summary() # + [markdown] id="U2VDwzdDeBT1" # # + colab={"base_uri": "https://localhost:8080/"} id="kOn95b3tLsGw" outputId="4932b6df-70da-4cd0-d709-19e1013ce751" IMG_SIZE = 224 #momentum: float hyperparameter >= 0 that accelerates gradient descent in the relevant direction and dampens oscillations. #Defaults to 0, i.e., vanilla gradient descent. #nesterov: boolean. Whether to apply Nesterov momentum. Defaults to False. 
optimization = optimizers.SGD(lr=.0001,decay=1e-6 ,momentum=0.9,nesterov=True,name="SGD") vgg_model.compile(optimizer=optimization, loss='categorical_crossentropy', metrics=['accuracy']) x_train = np.array([i[0] for i in train_data]).reshape(-1, IMG_SIZE, IMG_SIZE, 3) y_train = np.asarray([i[1] for i in train_data]) vgg_model.fit(x_train, y_train, batch_size=8, epochs=10, verbose=1) vgg_model.save("vgg16.model") # + colab={"base_uri": "https://localhost:8080/"} id="jxVajWjeWlJP" outputId="738163c2-b418-4564-b107-dba9ac6f4b60" Submit=[] for num, data in enumerate(test_data[:]): img_label = data[0] name = data[1] data = img_label.reshape(-1,224, 224, 3) model_out = vgg_model.predict([data])[0] if np.argmax(model_out) == 0: Submit.append([name,0]) elif np.argmax(model_out) == 1: Submit.append([name,1]) elif np.argmax(model_out) == 2: Submit.append([name,2]) elif np.argmax(model_out) == 3: Submit.append([name,3]) elif np.argmax(model_out) == 4: Submit.append([name,4]) elif np.argmax(model_out) == 5: Submit.append([name,5]) print(Submit) Submied=pd.DataFrame(Submit,columns=['Image','Label']) Submied.to_csv("/content/submit.csv", index = False) # + id="Ct3S-MwfMiI7" # + id="3nGcPYnn3R36" #Ensemble is done because an ensemble tends to perform better than singles improving the generalization ability. #CONCATENATION ENSEMBLE #The concatenation is the most common technique to merge different data sources. # A concatenation ensemble receives different inputs, whatever their dimensions are, and concatenates them on a given axis. # A drawback, of attaching information side by side, can be the dimensionality explosion. # + id="besQaW-3oXUg" model1 = tf.keras.models.load_model("/content/drive/MyDrive/natural scense/vgg19(true).model") #freezing all the weights:before compiling the model model1.trainable = False for i, layer in enumerate(model1.layers): #Change layers names ! layer._name = 'm1_layer_' + str(i) # + id="MOHg1r_HBHAi" model2 = tf.keras.models.load_model("/content/drive/MyDrive/natural scense/vgg16.model") model2.trainable = False for i, layer in enumerate(model2.layers): layer._name = 'm2_layer_' + str(i) # + id="KS0PooV8BHGh" merge = Concatenate()([model1.output,model2.output]) merge = tf.keras.layers.Dense(3072, activation='relu')(merge) output = tf.keras.layers.Dense(6, activation='softmax')(merge) Final_model = tf.keras.models.Model(inputs=[model1.input,model2.input], outputs=output) # + id="JK0567-mBHO_" IMG_SIZE = 224 optimization = optimizers.SGD(lr=.001,decay=1e-6 ,momentum=0.9,nesterov=True,name="SGD") Final_model.compile(optimizer=optimization, loss='categorical_crossentropy', metrics=['accuracy']) x_train1= np.array([i[0] for i in train_data]).reshape(-1, IMG_SIZE, IMG_SIZE, 3) x_train2= np.array([i[0] for i in train_data]).reshape(-1, IMG_SIZE, IMG_SIZE, 3) y_train = np.asarray([i[1] for i in train_data]) #By setting verbose 0, 1 or 2 you just say how do you want to 'see' the training progress for each epoch. 
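# verbose=0 trains silently, verbose=1 shows a per-batch progress bar, and verbose=2 prints one
# summary line per epoch; the fit call below keeps the progress bar.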
Final_model.fit([x_train1,x_train2], y_train, batch_size=8, epochs=10, verbose=1) Final_model.save("ensemble.model") # + colab={"base_uri": "https://localhost:8080/"} id="12B_gX8WFTZS" outputId="8a7df1c2-8ac5-472f-b022-7e49dfdce10c" Submit=[] for num, data in enumerate(test_data[:]): img_label = data[0] name=data[1] data1 = img_label.reshape(-1,224, 224, 3) data2 = img_label.reshape(-1,224, 224, 3) model_out = Final_model.predict([data1,data2])[0] if np.argmax(model_out) == 0: Submit.append([name,0]) elif np.argmax(model_out) == 1: Submit.append([name,1]) elif np.argmax(model_out) == 2: Submit.append([name,2]) elif np.argmax(model_out) == 3: Submit.append([name,3]) elif np.argmax(model_out) == 4: Submit.append([name,4]) elif np.argmax(model_out) == 5: Submit.append([name,5]) print(Submit) Submited=pd.DataFrame(Submit,columns=['Image','Label']) Submited.to_csv("/content/submit.csv", index = False)
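# +
# The two submission loops above can be written more compactly: the submitted label is simply the
# argmax index of the softmax output. A sketch, assuming the (image array, file name) layout of
# `test_data` built earlier; `build_submission` is an illustrative helper, not part of the
# competition kit.
import numpy as np
import pandas as pd

def build_submission(model, samples, n_inputs=2):
    rows = []
    for image, name in samples:
        batch = image.reshape(-1, 224, 224, 3)
        probs = model.predict([batch] * n_inputs)[0]  # the ensemble takes two identical inputs
        rows.append([name, int(np.argmax(probs))])
    return pd.DataFrame(rows, columns=['Image', 'Label'])
# -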
13,740
/CLT-and-Confidence-Intervals.ipynb
c44f8ecfca09c3f64217c3dc2e953d2c77c9c314
[]
no_license
andrewwhitman/DSLE-083021-CLT-Confidence-Intervals
https://github.com/andrewwhitman/DSLE-083021-CLT-Confidence-Intervals
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
15,884
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="xVg-lho6_ezp" colab_type="code" outputId="4d12cab7-2385-4386-fd9d-107ca697bc9f" colab={"base_uri": "https://localhost:8080/", "height": 1000} # !pip install zeugma # + id="Wq9KjbkYANQu" colab_type="code" colab={} import pandas as pd import numpy as np import nltk from nltk.corpus import stopwords from sklearn.model_selection import train_test_split from sklearn.linear_model import LogisticRegression from zeugma.embeddings import EmbeddingTransformer # + id="pr9xqX_kAoUa" colab_type="code" outputId="894b1d56-43d5-4a92-ac4f-6ad55e1e58d8" colab={"base_uri": "https://localhost:8080/", "height": 85} nltk.download('stopwords') nltk.download('punkt') def preprocess(data): ''' Credit goes to https://www.kaggle.com/gpreda/jigsaw-fast-compact-solution ''' punct = "/-'?!.,#$%\'()*+-/:;<=>@[\\]^_`{|}~`" + '""“”’' + '∞θ÷α•à−β∅³π‘₹´°\ £€\×™√²—–&' def clean_special_chars(text, punct): for p in punct: text = text.replace(p, ' ') return text data = clean_special_chars(str(data), punct) data = data.split() stop_words = set(stopwords.words('english')) cleaned = [word for word in data if word not in stop_words] return " ".join(cleaned) # + id="NvwqWCFeBFzM" colab_type="code" colab={} train_file = "/content/drive/My Drive/MachineLearning/toxic/train.csv" df = pd.read_csv(train_file) # + id="CF6HxUxHBmRi" colab_type="code" colab={} #Clean data df['clean_text'] = df['comment_text'].apply(lambda x: preprocess(x)) # + id="jjvu0EcMBJfp" colab_type="code" colab={} #Split into training and test data train, test = train_test_split(df, test_size=0.2) x_train = train['clean_text'] x_test = test['clean_text'] y_train = np.where(train['target'] >= 0.5, 1, 0) y_test = np.where(test['target'] >= 0.5, 1, 0) #y_cat_train = train[['target', 'severe_toxicity', 'obscene', 'identity_attack', 'insult', 'threat']] #y_cat_test = test[['target', 'severe_toxicity', 'obscene', 'identity_attack', 'insult', 'threat']] # + id="SXepTNFyRwnw" colab_type="code" outputId="c3182c30-5bc5-4bd6-deee-010f915c95e4" colab={"base_uri": "https://localhost:8080/", "height": 204} y_cat_train.head() # + id="1kutgCjg_iEi" colab_type="code" outputId="46af5d1f-5410-4152-fd92-9a16ac821727" colab={"base_uri": "https://localhost:8080/", "height": 105} #Encode training data with glove vectors glove = EmbeddingTransformer('glove') x_train = glove.transform(x_train) # + id="JalLk6buDP_E" colab_type="code" outputId="ca5a63da-1d7c-4c95-df14-bb0449f3279d" colab={"base_uri": "https://localhost:8080/", "height": 85} #Fit LR model target_model = LogisticRegression(C=5, random_state=42, solver='sag', max_iter=1000, n_jobs=-1) target_model.fit(x_train, y_train) # + id="-jwqxK6IEvsY" colab_type="code" colab={} #Validate model on test data x_test_glove = glove.transform(x_test) predictions = target_model.predict_proba(x_test_glove)[:,1] # + id="4YGdlJNEtA8h" colab_type="code" colab={} submission = pd.DataFrame.from_dict({ 'id': test['id'], 'prediction': predictions }) # + id="uT1DYHjhtPr5" colab_type="code" outputId="a7202629-559e-4a31-9b8a-25cbe3b38d4f" colab={"base_uri": "https://localhost:8080/", "height": 419} submission # + id="F8WIJGqdrOEu" colab_type="code" colab={} # From baseline kernel from sklearn import metrics def calculate_overall_auc(df, model_name): true_labels = df[TOXICITY_COLUMN]>0.5 predicted_labels = df[model_name] return 
metrics.roc_auc_score(true_labels, predicted_labels) def power_mean(series, p): total = sum(np.power(series, p)) return np.power(total / len(series), 1 / p) def get_final_metric(bias_df, overall_auc, POWER=-5, OVERALL_MODEL_WEIGHT=0.25): bias_score = np.average([ power_mean(bias_df[SUBGROUP_AUC], POWER), power_mean(bias_df[BPSN_AUC], POWER), power_mean(bias_df[BNSP_AUC], POWER) ]) return (OVERALL_MODEL_WEIGHT * overall_auc) + ((1 - OVERALL_MODEL_WEIGHT) * bias_score) SUBGROUP_AUC = 'subgroup_auc' BPSN_AUC = 'bpsn_auc' # stands for background positive, subgroup negative BNSP_AUC = 'bnsp_auc' # stands for background negative, subgroup positive def compute_auc(y_true, y_pred): try: return metrics.roc_auc_score(y_true, y_pred) except ValueError: return np.nan def compute_subgroup_auc(df, subgroup, label, model_name): subgroup_examples = df[df[subgroup]>0.5] return compute_auc((subgroup_examples[label]>0.5), subgroup_examples[model_name]) def compute_bpsn_auc(df, subgroup, label, model_name): """Computes the AUC of the within-subgroup negative examples and the background positive examples.""" subgroup_negative_examples = df[(df[subgroup]>0.5) & (df[label]<=0.5)] non_subgroup_positive_examples = df[(df[subgroup]<=0.5) & (df[label]>0.5)] examples = subgroup_negative_examples.append(non_subgroup_positive_examples) return compute_auc(examples[label]>0.5, examples[model_name]) def compute_bnsp_auc(df, subgroup, label, model_name): """Computes the AUC of the within-subgroup positive examples and the background negative examples.""" subgroup_positive_examples = df[(df[subgroup]>0.5) & (df[label]>0.5)] non_subgroup_negative_examples = df[(df[subgroup]<=0.5) & (df[label]<=0.5)] examples = subgroup_positive_examples.append(non_subgroup_negative_examples) return compute_auc(examples[label]>0.5, examples[model_name]) def compute_bias_metrics_for_model(dataset, subgroups, model, label_col, include_asegs=False): """Computes per-subgroup metrics for all subgroups and one model.""" records = [] for subgroup in subgroups: record = { 'subgroup': subgroup, 'subgroup_size': len(dataset[dataset[subgroup]>0.5]) } record[SUBGROUP_AUC] = compute_subgroup_auc(dataset, subgroup, label_col, model) record[BPSN_AUC] = compute_bpsn_auc(dataset, subgroup, label_col, model) record[BNSP_AUC] = compute_bnsp_auc(dataset, subgroup, label_col, model) records.append(record) return pd.DataFrame(records).sort_values('subgroup_auc', ascending=True) # + id="9l9SLFFMrQ2C" colab_type="code" outputId="748b03ef-2d93-45e8-ffc7-c0d05ac431cd" colab={"base_uri": "https://localhost:8080/", "height": 156} identity_columns = [ 'male', 'female', 'homosexual_gay_or_lesbian', 'christian', 'jewish', 'muslim', 'black', 'white', 'psychiatric_or_mental_illness'] MODEL_NAME = 'model1' test[MODEL_NAME]= submission["prediction"] TOXICITY_COLUMN = 'target' bias_metrics_df = compute_bias_metrics_for_model(test, identity_columns, MODEL_NAME, 'target') bias_metrics_df get_final_metric(bias_metrics_df, calculate_overall_auc(test, MODEL_NAME))
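# +
# Quick illustration of how the power mean with p=-5 pulls the bias score towards the weakest
# subgroup. The AUC values below are made up purely for illustration (they are not model results).
import pandas as pd

toy_bias_df = pd.DataFrame({
    SUBGROUP_AUC: [0.95, 0.70, 0.90],
    BPSN_AUC:     [0.92, 0.75, 0.88],
    BNSP_AUC:     [0.94, 0.80, 0.91],
})
print(power_mean(toy_bias_df[SUBGROUP_AUC], -5))        # dragged towards the 0.70 subgroup
print(get_final_metric(toy_bias_df, overall_auc=0.93))  # weighted blend with the overall AUC
# -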
7,144
/materials/st_ipython.ipynb
07a8259714784bb0c6d02dc706f32992f8f4a987
[ "MIT" ]
permissive
edmarruano/python4geosciences
https://github.com/edmarruano/python4geosciences
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
15,757
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.15.2
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Coding outside of Jupyter notebooks
#
# We have used Jupyter notebooks in this class as a useful tool for integrating text and interactive, usable code. However, most real-life coding work would not use Jupyter notebooks but would instead be accomplished by typing the code in text files and then running them via a terminal window or in IPython. Many of you have done this before, perhaps analogously in Matlab by writing your code in a `.m` file and then running it in the Matlab GUI. Writing code in a separate file allows heavier computations as well as allowing that code to be reused more easily than when it is written in a Jupyter notebook.
#
# Here, we will demonstrate typing Python code in a `.py` file and then running it in an IPython window.

from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline

x = np.linspace(0, 10)
y = x**2

fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(x, y, 'k', lw=3)

# Now let's switch to a terminal window and a text file...
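# As a rough sketch of that workflow (the file name `myscript.py` is just an example), the same
# plotting code can be written out to a script and run outside the notebook:

script_text = """import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10)
y = x**2

fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(x, y, 'k', lw=3)
fig.savefig('parabola.png')  # save to a file since there is no inline display
"""

with open('myscript.py', 'w') as f:
    f.write(script_text)

# Then run it with `python myscript.py` in a terminal, or with `%run myscript.py` from IPython.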
1,306
/Regression/5-FacebookMetrics/.ipynb_checkpoints/FacebookMetrics-checkpoint.ipynb
f3342e2197999c3668fd4b17a4956ffdf58b6787
[]
no_license
12sandeep1994/Machine-Learning-Comp-6321
https://github.com/12sandeep1994/Machine-Learning-Comp-6321
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
216,985
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from sklearn import preprocessing from sklearn.model_selection import train_test_split from sklearn.model_selection import KFold from sklearn.preprocessing import StandardScaler from sklearn.metrics import mean_squared_error from sklearn.model_selection import GridSearchCV from sklearn.preprocessing import LabelBinarizer from sklearn.svm import SVR from sklearn.tree import DecisionTreeRegressor from sklearn.ensemble import RandomForestRegressor from sklearn.ensemble import AdaBoostRegressor from sklearn.gaussian_process import GaussianProcessRegressor from sklearn.gaussian_process.kernels import RBF, RationalQuadratic, ExpSineSquared, ConstantKernel from sklearn.linear_model import LinearRegression from sklearn.linear_model import Ridge from sklearn.neural_network import MLPRegressor from datetime import datetime # - df_fb_metrics = pd.read_csv('dataset_Facebook.csv',sep=';') print(df_fb_metrics.info()) df_fb_metrics.replace(r'^\s*$', np.nan, regex=True, inplace = True) df_fb_metrics.replace('?', np.nan, inplace = True) print(df_fb_metrics.isnull().sum()) #Since we are going to consider Total Interactions as the output variable (being the sum of Comment , Like, Share) #we can drop Comment , Like, Share and the null values associated with them. individual_interaction_components = ['comment','like','share'] df_fb_metrics = df_fb_metrics.drop(columns=individual_interaction_components, axis=1) df_fb_metrics.head() #We now have a categorical input variable Type. df_fb_metrics["Type"].value_counts() #With just 4 different types of values we can plan to use OneHotEncoding which would add 4 new columns to our dataframe lb_style = LabelBinarizer() lb_results = lb_style.fit_transform(df_fb_metrics["Type"]) encoded_df = pd.DataFrame(lb_results, columns=lb_style.classes_) df_fb_metrics = pd.concat([df_fb_metrics,encoded_df], axis=1) cols = list(df_fb_metrics) cols.insert(1, cols.pop(cols.index('Link'))) cols.insert(2, cols.pop(cols.index('Photo'))) cols.insert(3, cols.pop(cols.index('Status'))) cols.insert(4, cols.pop(cols.index('Video'))) df_fb_metrics = df_fb_metrics.loc[:, cols] df_fb_metrics = df_fb_metrics.drop(columns=['Type'],axis=1) df_fb_metrics.head() #We now have a categorical input variable Type. df_fb_metrics["Paid"].value_counts() df_fb_metrics[df_fb_metrics.isna().any(axis=1)] #Fill missing value with most common value df_fb_metrics = df_fb_metrics.apply(lambda x: x.fillna(x.value_counts().index[0])) df_fb_metrics.shape #Removing Outliers remove records that are above the 90th percentile outlier_cut_off_value = np.percentile(df_fb_metrics['Total Interactions'],90) df_fb_metrics = df_fb_metrics[df_fb_metrics['Total Interactions']<outlier_cut_off_value] df_fb_metrics.shape #Preparing the training and testing dataset. 
X_train, X_test, y_train, y_test = train_test_split(df_fb_metrics.iloc[:, 0:18].values, df_fb_metrics.iloc[:, 18].values, test_size=0.33, random_state=0) #Standardising the data set scaler = StandardScaler() scaler.fit(X_train) x_train_scaled = scaler.transform(X_train) x_test_scaled = scaler.transform(X_test) plt.figure(figsize=(12,10)) df_fb_metrics_corr = df_fb_metrics.corr() mask = np.zeros_like(df_fb_metrics_corr, dtype=np.bool) mask[np.triu_indices_from(mask)] = True sns.heatmap(df_fb_metrics_corr,annot=True,cbar=False, mask=mask) # + #Running the dataset across various regressors # - def svr_param_selection(X, y, X_test, y_test, nfolds): Kernels = ['linear', 'poly', 'rbf'] Cs = [0.001, 0.01, 0.1, 1] Gammas = [0.001, 0.01, 0.1] param_grid = {'kernel':Kernels, 'C': Cs, 'gamma' : Gammas} grid_search = GridSearchCV(SVR(), param_grid, cv=nfolds, n_jobs=-1, iid=False) grid_search.fit(X, y) print('SVR MSE Score for training data: '+str(grid_search.best_score_)) print('SVR With Parameters: '+str(grid_search.best_params_)) print('SVR coefficient of determination R^2 on test data: '+str(grid_search.best_estimator_.score(X_test, y_test))) y_pred = grid_search.best_estimator_.predict(X_test) print('MSE for SVR on test set: '+str(mean_squared_error(y_test, y_pred))) return grid_search.best_params_ def random_forest_regressor_param_selection(X, y, X_test, y_test, nfolds): Estimators = np.arange(1,100,15) Max_features = ['auto', 'sqrt'] Min_samples_leafs = np.linspace(0.01, 0.05, 5, endpoint=True) param_grid = {'n_estimators': Estimators, 'max_features': Max_features, 'min_samples_leaf': Min_samples_leafs} grid_search = GridSearchCV(RandomForestRegressor(random_state=0), param_grid, cv=nfolds, n_jobs=-1, iid=False) grid_search.fit(X, y) print('RandomForestRegressor MSE Score for training data: '+str(grid_search.best_score_)) print('RandomForestRegressor With Parameters: '+str(grid_search.best_params_)) print('Random Forest coefficient of determination R^2 on test data: '+str(grid_search.best_estimator_.score(X_test, y_test))) y_pred = grid_search.best_estimator_.predict(X_test) print('MSE for Random Forest Regressor on test set: '+str(mean_squared_error(y_test, y_pred))) return grid_search.best_params_ def decision_tree_regressor_param_selection(X, y, X_test, y_test, nfolds): Max_features = ['auto', 'sqrt'] Min_samples_leafs = np.linspace(0.01, 0.05, 5, endpoint=True) param_grid = {'max_features': Max_features, 'min_samples_leaf': Min_samples_leafs} grid_search = GridSearchCV(DecisionTreeRegressor(random_state=0), param_grid, cv=nfolds, n_jobs=-1, iid=False) grid_search.fit(X, y) print('DecisionTreeRegressor MSE Score for training data: '+str(grid_search.best_score_)) print('DecisionTreeRegressor With Parameters: '+str(grid_search.best_params_)) print('Decision Tree coefficient of determination R^2 on test data: '+str(grid_search.best_estimator_.score(X_test, y_test))) y_pred = grid_search.best_estimator_.predict(X_test) print('MSE for Decision Tree Regressor on test set: '+str(mean_squared_error(y_test, y_pred))) return grid_search.best_params_ def ada_boost_regressor_param_selection(X, y, X_test, y_test, nfolds): Estimators = np.arange(1,100,15) Learning_rates = [0.01,0.05,0.1,0.3] Losses = ['linear', 'square', 'exponential'] param_grid = {'n_estimators': Estimators, 'learning_rate': Learning_rates, 'loss': Losses} grid_search = GridSearchCV(AdaBoostRegressor(base_estimator=DecisionTreeRegressor(random_state=0),random_state=0), param_grid, cv=nfolds, n_jobs=-1, iid=False) grid_search.fit(X, y) 
print('AdaBoostRegressor MSE Score for training data: '+str(grid_search.best_score_)) print('AdaBoostRegressor With Parameters:'+str(grid_search.best_params_)) print('AdaBoost Regressor coefficient of determination R^2 on test data: '+str(grid_search.best_estimator_.score(X_test, y_test))) y_pred = grid_search.best_estimator_.predict(X_test) print('MSE for AdaBoost Regressor on test set: '+str(mean_squared_error(y_test, y_pred))) return grid_search.best_params_ def gaussian_regressor_param_selection(X, y, X_test, y_test, nfolds): kernel_rbf = ConstantKernel(1.0, constant_value_bounds="fixed") * RBF(1.0, length_scale_bounds="fixed") kernel_rq = ConstantKernel(1.0, constant_value_bounds="fixed") * RationalQuadratic(alpha=0.1, length_scale=1) kernel_expsine = ConstantKernel(1.0, constant_value_bounds="fixed") * ExpSineSquared(1.0, 5.0, periodicity_bounds=(1e-2, 1e1)) Kernels = [kernel_rbf, kernel_rq, kernel_expsine] param_grid = {'kernel': Kernels} grid_search = GridSearchCV(GaussianProcessRegressor(random_state=0), param_grid, cv=nfolds, n_jobs=-1, iid=False) grid_search.fit(X, y) print('GaussianRegressor MSE Score for training data: '+str(grid_search.best_score_)) print('GaussianRegressor With Parameters:'+str(grid_search.best_params_)) print('Gaussian Regressor coefficient of determination R^2 on test data: '+str(grid_search.best_estimator_.score(X_test, y_test))) y_pred = grid_search.best_estimator_.predict(X_test) print('MSE for Gaussian Regressor on test set: '+str(mean_squared_error(y_test, y_pred))) return grid_search.best_params_ def linear_regressor_param_selection(X, y, X_test, y_test, nfolds): param_grid = {'fit_intercept':[True,False], 'normalize':[True,False], 'copy_X':[True, False]} grid_search = GridSearchCV(LinearRegression(), param_grid, cv=nfolds, n_jobs=-1, iid=False) grid_search.fit(X, y) print('LinearRegressor MSE Score for training data: '+str(grid_search.best_score_)) print('LinearRegressor With Parameters:'+str(grid_search.best_params_)) print('Linear Regressor coefficient of determination R^2 on test data: '+str(grid_search.best_estimator_.score(X_test, y_test))) y_pred = grid_search.best_estimator_.predict(X_test) print('MSE for LinearRegressor on test set: '+str(mean_squared_error(y_test, y_pred))) return grid_search.best_params_ def neural_network_regressor_param_selection(X, y, X_test, y_test, nfolds): Hidden_Layer_Sizes = [1, 5, (5,5), 10, (10,5)] Activations = ['logistic', 'tanh', 'relu'] param_grid = {'hidden_layer_sizes': Hidden_Layer_Sizes, 'activation': Activations} grid_search = GridSearchCV(MLPRegressor(max_iter=1000,learning_rate='adaptive',solver='lbfgs',random_state=0), param_grid, cv=nfolds, n_jobs=-1, iid=False) grid_search.fit(X, y) print('NeuralNetworkRegressor MSE Score for training data: '+str(grid_search.best_score_)) print('NeuralNetworkRegressor With Parameters:'+str(grid_search.best_params_)) print('Neural Network Regressor coefficient of determination R^2 on test data: '+str(grid_search.best_estimator_.score(X_test, y_test))) y_pred = grid_search.best_estimator_.predict(X_test) print('MSE for NeuralNetwork Regressor on test set: '+str(mean_squared_error(y_test, y_pred))) return grid_search.best_params_ #Using the 3-Fold HyperParam Search to evaluate the best hyperparams for each model print("now ="+str(datetime.now())) svr_best_param = svr_param_selection(x_train_scaled, y_train, x_test_scaled, y_test, 3) print("now ="+str(datetime.now())) random_forest_best_param = random_forest_regressor_param_selection(x_train_scaled, y_train, 
x_test_scaled, y_test, 3) print("now ="+str(datetime.now())) decision_tree_best_param = decision_tree_regressor_param_selection(x_train_scaled, y_train, x_test_scaled, y_test, 3) print("now ="+str(datetime.now())) ada_boost_best_param = ada_boost_regressor_param_selection(x_train_scaled, y_train, x_test_scaled, y_test, 3) print("now ="+str(datetime.now())) linear_best_param = linear_regressor_param_selection(x_train_scaled, y_train, x_test_scaled, y_test, 3) print("now ="+str(datetime.now())) neural_network_best_param = neural_network_regressor_param_selection(x_train_scaled, y_train, x_test_scaled, y_test, 3) print("now ="+str(datetime.now())) #gaussian_best_param = gaussian_regressor_param_selection(X_train_scaled, y_train, X_test_scaled, 3) #print("now ="+str(datetime.now())) terminación # https://es.wikipedia.org/wiki/Coeficiente_de_determinaci%C3%B3n from tensorflow.keras import backend as K def coeff_determination(y_true, y_pred): SS_res = K.sum(K.square( y_true-y_pred )) SS_tot = K.sum(K.square( y_true - K.mean(y_true) ) ) return ( 1 - SS_res/(SS_tot + K.epsilon()) ) # + colab={} colab_type="code" id="gr_oRwaNpvlD" # %matplotlib inline import matplotlib.pyplot as plt def plot_hist(history): plt.subplot(121) plt.plot(history.history['mean_absolute_error']) plt.plot(history.history['val_mean_absolute_error']) plt.title('model MSE') plt.ylabel('MSE') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.ylim((0, 2)) plt.subplot(122) plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('model loss') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.ylim((0, 2)) plt.show() # + [markdown] colab_type="text" id="ndxwZkEvuXLz" # ### Transfer Learning # + colab={"base_uri": "https://localhost:8080/", "height": 366} colab_type="code" executionInfo={"elapsed": 136897, "status": "ok", "timestamp": 1539932167453, "user": {"displayName": "Matias Grinberg", "photoUrl": "", "userId": "00077740590691160022"}, "user_tz": 180} id="RyNY1wX4F7jp" outputId="041e44ce-b52d-4599-f955-93be435ad882" # ~2 min # Bajamos embeddings fasttext preentrenados # !wget http://dcc.uchile.cl/~jperez/word-embeddings/fasttext-sbwc.3.6.e20.vec.gz # !yes n | gunzip fasttext-sbwc.3.6.e20.vec.gz # + colab={"base_uri": "https://localhost:8080/", "height": 66} colab_type="code" executionInfo={"elapsed": 66512, "status": "ok", "timestamp": 1539932260802, "user": {"displayName": "Matias Grinberg", "photoUrl": "", "userId": "00077740590691160022"}, "user_tz": 180} id="leKcXDbWF61_" outputId="3b0a9a12-ff0e-4706-c214-8765cbcee731" # %%time import os # Leemos los embeddings y armamos un diccionario embedding_dim = 300 emb_dir = os.getcwd() embeddings_index = {} f = open(os.path.join(emb_dir, 'fasttext-sbwc.3.6.e20.vec')) for line in f: values = line.split() word = values[0] coefs = np.asarray(values[1:], dtype='float32') embeddings_index[word] = coefs f.close() print('Found %s word vectors.' 
% len(embeddings_index)) # + colab={} colab_type="code" id="ycqSawSnLKJ2" # Volcamos el diccionario en un numpy array para usar con nuestro modelo embedding_matrix = np.zeros((max_words, embedding_dim)) for word, i in word_index.items(): embedding_vector = embeddings_index.get(word) if i < max_words: if embedding_vector is not None: # Las palabras no encontradas en el índice serán todos ceros embedding_matrix[i] = embedding_vector # + [markdown] colab_type="text" id="-vd5o2uSuIE4" # #### Callbacks # + colab={"base_uri": "https://localhost:8080/", "height": 453} colab_type="code" executionInfo={"elapsed": 8727, "status": "ok", "timestamp": 1539932025252, "user": {"displayName": "Matias Grinberg", "photoUrl": "", "userId": "00077740590691160022"}, "user_tz": 180} id="h8EU1CzFiYkY" outputId="2465fabe-7bde-4b6c-bcec-0c047d5745e9" # !pip install python-telegram-bot # + colab={} colab_type="code" id="7b6l0Ihpl-6G" # Config de telegram config= { 'token': '', # bot token 'telegram_id': 0, # telegram_id } # + colab={"base_uri": "https://localhost:8080/", "height": 83} colab_type="code" executionInfo={"elapsed": 10718, "status": "ok", "timestamp": 1539932278522, "user": {"displayName": "Matias Grinberg", "photoUrl": "", "userId": "00077740590691160022"}, "user_tz": 180} id="GeYgVTUAb451" outputId="262260e6-50c3-4d16-e1ef-a991309b2f9b" # !git clone https://github.com/qubvel/keras_telegram_callback.git # !cd keras_telegram_callback # + colab={} colab_type="code" id="SNA4fkkxeGvT" from keras_telegram_callback.callbacks import TelegramCallback from tensorflow.keras.callbacks import EarlyStopping, TensorBoard, ReduceLROnPlateau, ModelCheckpoint, Callback from tensorflow.keras import backend as K # + colab={} colab_type="code" id="TJtWLfk5XV2X" class TelegramCallbackVerbose(TelegramCallback): def on_train_begin(self, logs={}): text = 'Start training model {}, {}.'.format(CODE, str(params_)) self.send_message(text) def on_epoch_end(self, epoch, logs={}): text = 'Code {}. 
Epoch {}.\n'.format(CODE, epoch) for k, v in logs.items(): text += '{}: {:.4f}; '.format(k, v) self.send_message(text) # + colab={} colab_type="code" id="uGotu0iMSz8X" class PrintLRCallback(Callback) : def on_epoch_begin(self, epoch, logs=None): lr = float(K.get_value(self.model.optimizer.lr)) print(" epoch={:02d}, lr={:.5f}".format( epoch, lr)) # + colab={} colab_type="code" id="OkNhAjVSOlTc" import requests as rq # Declaramos el callback para SGD con restarts url_sgdr = 'https://gist.github.com/jeremyjordan/5a222e04bb78c242f5763ad40626c452/raw/1c2ade91682976896d84a2d42f69dcd5798f5f56/sgdr.py' exec(rq.get(url_sgdr).text) # No funciona # Declaramos el optimizador AdamW #url_adamw = 'https://github.com/shaoanlu/AdamW-and-SGDW/raw/master/AdamW.py' #exec(rq.get(url_adamw).text) # + colab={} colab_type="code" id="EgXPMkL0pvkG" from tensorflow.keras.models import Sequential, Model from tensorflow.keras.layers import Input, Dense, Flatten, Embedding, Dropout, Concatenate, BatchNormalization, GRU, LSTM, TimeDistributed from tensorflow.keras import regularizers def make_model(X_train, y_train, X_val, y_val, params): global CODE CODE = np.random.randint(0,1000000) tab_model = Sequential() tab_model.add(Dense(params['dense1_n'], input_shape = (4,), activation = params['dense1_act'])) tab_model.add(Dropout(params['dense1_drop'])) tab_model.add(Dense(params['dense2_n'], activation = params['dense2_act'], name='dense2')) tab_model.add(Dropout(params['dense2_drop'])) if params['deep_input'] == True: tab_model.add(Dense(params['dense2_n'], activation = params['dense2_act'])) tab_model.add(Dropout(params['dense2_drop'])) tab_inp = Input(shape=(4, ), name = 'tab') tab_out = tab_model(tab_inp) txt_inp = Input(shape=(max_len,), name = 'txt') emb = Embedding(input_dim = max_words, output_dim=embedding_dim, input_length=max_len, name = 'emb')(txt_inp) x0_ = LSTM(params['lstm1_n'], return_sequences=True)(emb) x0 = Dropout(params['lstm1_drop'])(x0_) x0 = LSTM(params['lstm2_n'], return_sequences= params['deep_lstm'])(x0) x0 = Dropout(params['lstm2_drop'])(x0) if params['deep_lstm'] == True: x0 = LSTM(params['lstm3_n'])(x0) x0 = Dropout(params['lstm3_drop'])(x0) x1 = Flatten()(emb) x1 = Dense(params['dense3_n'], activation = params['dense3_act'])(x1) x1 = Dropout(params['dense3_drop'])(x1) if params['deep'] == True: x1 = Dense(params['dense4_n'], params['dense4_act'])(x1) x1 = Dropout(params['dense4_drop'])(x1) x = Concatenate(axis=-1)([x0, x1]) if params['deep'] == True: x = Dense(params['dense5_n'], params['dense5_act'])(x) x = Dropout(params['dense5_drop'])(x) txt_out = Dense(params['dense6_n'], params['dense6_act'])(x) if params['residual']: conc = Concatenate(axis=-1)([tab_out, txt_out, x0]) else: conc = Concatenate(axis=-1)([tab_out, txt_out]) x = Dense(params['dense8_n'], params['dense8_act'])(conc) x = Dropout(params['dense8_drop'])(x) if params['deep_output'] == True: x = Dense(params['dense9_n'], params['dense9_act'])(x) x = Dropout(params['dense9_drop'])(x) pred = Dense(1, activation='linear')(x) model = Model(inputs=[tab_inp, txt_inp], outputs=[pred]) print_lr = PrintLRCallback() tel = TelegramCallbackVerbose(config) reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=2, min_lr=0.00005, verbose=1) early = EarlyStopping(monitor='val_loss', patience=5, verbose=1) tensorb = TensorBoard(log_dir= log_path) check = ModelCheckpoint('modelcheck{}.h5'.format(CODE), verbose=1, save_best_only=True) sgdr = SGDRScheduler(min_lr=1e-5, max_lr=1e-2, steps_per_epoch=156) 
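    # SGDRScheduler implements warm restarts for the learning rate (annealed within a cycle, then
    # reset to a higher value), which can help fine-tuning escape poor local minima.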
model.get_layer('emb').set_weights([embedding_matrix]) model.get_layer('emb').trainable = False model.compile(optimizer= params['optimizer'](lr=params['lr'], amsgrad=params['amsgrad'], epsilon = params['epsilon']), loss=params['loss'], metrics=['mae', coeff_determination]) history = model.fit([X_train[0], X_train[1]], y_train, epochs=params['epochs'], batch_size=params['batch'], validation_data= ([X_val[0], X_val[1]], y_test), callbacks = [tel, reduce_lr, early, tensorb, check, print_lr, sgdr] ) tel.bot.send_document(document=open('modelcheck{}.h5'.format(CODE), 'rb'), chat_id=119657272) return model, history # + colab={} colab_type="code" id="6uWEJGQwtm6z" X_train = [X_num_train, X_txt_train] X_test = [X_num_test, X_txt_test] # + colab={} colab_type="code" id="2m9v2akBTA9z" # Random Search from tensorflow.keras.optimizers import SGD, Adam # + colab={} colab_type="code" id="vGXIZs9PrLDJ" params = {'batch': [128, 256, 512], 'deep': [0, 1], 'deep_input': [0, 1], 'deep_lstm': [0, 1], 'residual': [0, 1], 'deep_output': [0, 1], 'dense1_act': 'tanh', 'dense1_drop': (0, 0.5, 5), 'dense1_n': [32, 64, 96], 'dense2_act': 'relu', 'dense2_drop': (0.1, 0.4, 5), 'dense2_n': 32, 'dense3_act': 'tanh', 'dense3_drop': (0.1, 0.6, 6), 'dense3_n': [8,16,32], 'dense4_act': 'relu', 'dense4_drop': 0.03333333333333333, 'dense4_n': [32, 64, 96], 'dense5_act': 'tanh', 'dense5_drop': 0.1, 'dense5_n': [32, 64, 96], 'dense6_act': 'relu', 'dense6_drop': (0, 0.3, 6), 'dense6_n': 32, 'dense7_act': 'tanh', 'dense7_drop': 0.03333333333333333, 'dense7_n': 32, 'dense8_act': ['relu', 'tanh'], 'dense8_drop': 0, 'dense8_n': [32, 64, 96], 'dense8_act': ['relu', 'tanh'], 'dense8_drop': 0, 'dense8_n': [32, 64, 96], 'dense9_act': ['relu', 'tanh'], 'dense9_drop': 0, 'dense9_n': [32, 64, 96], 'epochs': 18, 'kernel_regularizer': 0., 'loss': 'mse', 'lstm1_drop': 0.2, 'lstm1_n': [8,16,32], 'lstm2_drop': 0.03333333333333333, 'lstm2_n': [8,16,32], 'lstm3_drop': 0.06666666666666667, 'lstm3_n': [4,8,16], 'max_val_coeff': 0.8245, 'min_val_loss': 0.1692, 'epsilon' : 1e-08, 'amsgrad': [True, False], 'lr': [0.1, 0.01, 0.005, 0.002, 0.001, 0.0005], 'optimizer': [Adam]} # + colab={} colab_type="code" id="R_XqT-wNZOQ5" scores = [] for i in range(1, 100): params_ = {} for k in params: if type(params[k]) == tuple: params_[k] = np.random.choice(np.linspace(*params[k])) elif type(params[k]) == list: params_[k] = np.random.choice(params[k]) else: params_[k] = params[k] model, history = make_model(X_train = X_train, y_train = y_train, X_val = X_test, y_val= y_test, params=params_) scores.append((history, params_))
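# +
# Once the random search above finishes, the collected (history, params) pairs can be ranked by
# their best validation loss. A small sketch; it assumes 'val_loss' is present in each History
# object, as it is for the fit call above.
def best_runs(scores, k=5):
    ranked = sorted(scores, key=lambda pair: min(pair[0].history['val_loss']))
    for history, params_ in ranked[:k]:
        print(round(min(history.history['val_loss']), 4), params_)
    return ranked[:k]
# -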
22,645
/project5_Popularity Prediction on Twitter/codes/Q1.5.ipynb
132aac35895abeeb4e8f41661989c973389587f7
[]
no_license
Farrraway/Large-Scale-Data-Mining
https://github.com/Farrraway/Large-Scale-Data-Mining
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
63,467
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Step1: preprocessing test data from raw data # + import json import numpy as np import pandas as pd import matplotlib.pyplot as plt import datetime, time import pytz # --------------------- preprocessing test data (raw -> csv) ----------------------- # # define paths files_raw = ['test_data/sample1_period1.txt', 'test_data/sample2_period2.txt', 'test_data/sample3_period3.txt', 'test_data/sample4_period1.txt', 'test_data/sample5_period1.txt', 'test_data/sample6_period2.txt', 'test_data/sample7_period3.txt', 'test_data/sample8_period1.txt', 'test_data/sample9_period2.txt', 'test_data/sample10_period3.txt'] # calculate statistics of each file def cal_statistics(file): date = [] time = [] tweet_count = [] followers_count = [] retweet_count = [] url_count = [] author_time = {} # name+nick : date : set(time) authors_count = [] mentions_count = [] rank_score = [] hashtag_count = [] # extract data with open(file, 'r') as cur_file: for line in cur_file: data = json.loads(line) # date and time timestamp = data['firstpost_date'] pst_tz = pytz.timezone('US/Pacific') timestamp = str(datetime.datetime.fromtimestamp(int(timestamp), pst_tz)) date_split = timestamp[0:10].split('-') cur_date = int(date_split[0]+date_split[1]+date_split[2]) date.append(cur_date) cur_time = int(timestamp[11:13]) time.append(cur_time) tweet_count.append(1) followers_count.append(data['author']['followers']) retweet_count.append(data['metrics']['citations']['total']) url_count.append(len(data['tweet']['entities']['urls'])) # unique authors author_name = data['author']['name']+'+'+data['author']['nick'] if author_name in author_time: ori_ = author_time[author_name] if cur_date in ori_: ori_times = ori_[cur_date] # set if cur_time in ori_times: authors_count.append(0) else: authors_count.append(1) ori_times.add(cur_time) else: authors_count.append(1) new_times = set() new_times.add(cur_time) ori_[cur_date] = new_times else: authors_count.append(1) new_times = set() new_times.add(cur_time) new_dates = {} new_dates[cur_date] = new_times author_time[author_name] = new_dates mentions_count.append(len(data['tweet']['entities']['user_mentions'])) rank_score.append(data['metrics']['ranking_score']) hashtag_count.append(data['title'].count('#')) df = pd.DataFrame({ 'tweet' : tweet_count, 'date' : date, 'time' : time, 'followers' : followers_count, 'retweets' : retweet_count, 'urls' : url_count, 'authors' : authors_count, 'mentions' : mentions_count, 'ranking score' : rank_score, 'hashtags' : hashtag_count }, columns = ['tweet', 'date', 'time', 'followers', 'retweets', 'urls', 'authors', 'mentions', 'ranking score', 'hashtags']) df.to_csv('extracted_data/Q1.5_'+file[10:-4]+'.csv', index = False) # extract data from each file for file in files_raw: cal_statistics(file) print ('Raw test data has been done!') # --------------------- preprocessing test data (csv -> hourly grouped csv) ----------------------- # # define paths files_hour = ['extracted_data/Q1.5_sample1_period1.csv', 'extracted_data/Q1.5_sample2_period2.csv', 'extracted_data/Q1.5_sample3_period3.csv', 'extracted_data/Q1.5_sample4_period1.csv', 'extracted_data/Q1.5_sample5_period1.csv', 'extracted_data/Q1.5_sample6_period2.csv', 'extracted_data/Q1.5_sample7_period3.csv', 'extracted_data/Q1.5_sample8_period1.csv', 'extracted_data/Q1.5_sample9_period2.csv', 
'extracted_data/Q1.5_sample10_period3.csv'] # load and process data from each test file def load_and_process(file): # process and groupby data data = pd.read_csv(file) data.columns = ['tweet', 'date', 'time', 'followers', 'retweets', 'urls', 'authors', 'mentions', 'ranking score', 'hashtags'] df = data.groupby(['date', 'time']).agg({'time' : np.max, 'tweet' : np.sum, 'retweets' : np.sum, 'followers' : np.sum, 'urls' : np.sum, 'authors' : np.sum, 'mentions' : np.sum, 'ranking score' : np.sum}) df.to_csv('extracted_data/Q1.5_hourly_'+file[20:-4]+'.csv', index=False) display(df) return df # linear regression model on each file for file in files_hour: load_and_process(file) print ('Each test data file has been grouped hourly!') # - # # Step2: fit best model on train data for each period # + import statsmodels.api as sm from sklearn import svm from sklearn.neural_network import MLPRegressor from statsmodels.regression.linear_model import RegressionResults from IPython.display import display from sklearn.metrics import mean_absolute_error # seperate aggregated train data into three period # 1. Before Feb. 1, 8:00 a.m. # 2. Between Feb. 1, 8:00 a.m. and 8:00 p.m. # 3. After Feb. 1, 8:00 p.m. def seperate(df): periods = [] periods.append(df.query('date < 20150201 or (date == 20150201 and time < 8)')) periods.append(df.query('date == 20150201 and time >= 8 and time <= 20')) periods.append(df.query('date > 20150201 or (date == 20150201 and time > 20)')) return periods # using the best model in 1.4.2 (all are LR) to train each period (35 features & 28 features for period 1) def regression_model_for_periods(periods): period_model = {} # 1 train model for each period (35 features) for i in range(3): # 3 periods period = periods[i] print (len(period.index)) input_arr = [] index_start = 0 for j in range(index_start, index_start+len(period.index)-5): # n-5 points cur_input = [] for k in range(5): # each point has 35 features for p in range(2,9): # append each column cur_input.append(period.iloc[j+k, p]) input_arr.append(cur_input) index_start = index_start + len(period.index) output_arr = period.loc[period.index[5]:, 'tweet'].values results = sm.OLS(output_arr, input_arr).fit() # if (i == 0): # results = svm.SVC(gamma=6) # results.fit(input_arr, output_arr) # else: # results = sm.OLS(output_arr, input_arr).fit() period_model[str(i+1)] = results # 2 train model for period 1 with 28 features period1 = periods[0] input_arr_ = [] for j in range(0,len(period1.index)-4): # n-4 points cur_input = [] for k in range(4): # each point has 28 features for p in range(2,9): # append each column cur_input.append(period1.iloc[j+k, p]) input_arr_.append(cur_input) output_arr_ = period1.loc[period1.index[4]:, 'tweet'].values results_ = sm.OLS(output_arr_, input_arr_).fit() # results_ = svm.SVC(gamma=6) # results_.fit(input_arr_, output_arr_) period_model['4'] = results_ return period_model # load data from hourly grouped aggregated train data df = pd.read_csv('extracted_data/Q1.4_#combine.csv') df.columns = ['date', 'time', 'tweet', 'retweets', 'followers', 'urls', 'authors', 'mentions', 'ranking score', 'hashtags'] periods = seperate(df) period_model = regression_model_for_periods(periods) # - # # Step3: predict on each test file using corresponding model # + # define paths files = ['extracted_data/Q1.5_hourly_sample1_period1.csv', 'extracted_data/Q1.5_hourly_sample2_period2.csv', 'extracted_data/Q1.5_hourly_sample3_period3.csv', 'extracted_data/Q1.5_hourly_sample4_period1.csv', 
'extracted_data/Q1.5_hourly_sample5_period1.csv', 'extracted_data/Q1.5_hourly_sample6_period2.csv', 'extracted_data/Q1.5_hourly_sample7_period3.csv', 'extracted_data/Q1.5_hourly_sample8_period1.csv', 'extracted_data/Q1.5_hourly_sample9_period2.csv', 'extracted_data/Q1.5_hourly_sample10_period3.csv'] # predict on test data def predict_on_test_data(file, period_model, df): period = file[-1] input_arr = [] predicted_output = None results = period_model[period] cur_input = [] if file[6] == '8': # 4-hour window for i in range(4): for p in range(1,8): # append each column cur_input.append(df.iloc[i, p]) results = period_model['4'] else: for i in range(5): for p in range(1,8): # append each column cur_input.append(df.iloc[i, p]) input_arr.append(cur_input) predicted_output = results.predict(input_arr) return predicted_output test_data = [] predict = [] true = [] error = [] # predict on each test file for file in files: test_data.append(file[27:-4]) # load data df = pd.read_csv(file) df.columns = ['time', 'tweet', 'retweets', 'followers', 'urls', 'authors', 'mentions', 'ranking score'] # predict predicted_output = predict_on_test_data(file[27:-4], period_model, df) predict.append(predicted_output[0]) # relative error true_value = df.loc[df.index[len(df.index)-1], 'tweet':'tweet'].values true.append(true_value[0]) rel_error = abs(predicted_output-true_value)/true_value error.append(rel_error[0]) res = pd.DataFrame({ 'test file' : test_data, 'predicted' : predict, 'true value' : true, 'relative error' : error }, columns = ['test file', 'predicted', 'true value', 'relative error']) display(res) # -
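# The nested loops above build the sliding-window design matrix by hand; the same construction can
# be sketched with NumPy as below. Column positions follow the hourly-grouped files (columns 1-7
# hold the numeric features) and `make_window_features` is only an illustrative helper.

# +
import numpy as np

def make_window_features(frame, window=5, feature_cols=range(1, 8), target_col='tweet'):
    values = frame.iloc[:, list(feature_cols)].values
    X_rows = [values[i:i + window].ravel() for i in range(len(values) - window)]
    y_rows = frame[target_col].values[window:]
    return np.asarray(X_rows), np.asarray(y_rows)
# -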
10,354
/ml/source/Day3/tensorflow_demo4.ipynb
cde09ca2d8513cbfc5c965a26d878c2b2298c358
[]
no_license
naxvinci/deeplearningTIL
https://github.com/naxvinci/deeplearningTIL
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
8,478
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # X 와 Y 의 상관관계를 분석하는 기초적인 선형 회귀 모델을 만들고 실행해봅니다. import tensorflow as tf x_data = [1, 2, 3] y_data = [1, 2, 3] W = tf.Variable(tf.random_uniform([1], -1.0, 1.0)) # 변수 // 웨이트값이랑 바이어스를 균등분포를 통해 랜덤값으로 바꿔 시작..? b = tf.Variable(tf.random_uniform([1], -1.0, 1.0)) # name: 나중에 텐서보드등으로 값의 변화를 추적하거나 살펴보기 쉽게 하기 위해 이름을 붙여줍니다. X = tf.placeholder(tf.float32, name="X") # 비어있던 공간에 데이터를 할당 Y = tf.placeholder(tf.float32, name="Y") print(X) print(Y) # 아직 세션으로 실행한게 아니라서 얘는 tensor입니다~ 라고만 뜬다 # + # X 와 Y 의 상관 관계를 분석하기 위한 가설 수식을 작성합니다. # y = W * x + b # W 와 X 가 행렬이 아니므로 tf.matmul(행렬곱) 이 아니라 기본 곱셈 기호를 사용했습니다. hypothesis = W * X + b # 손실 함수를 작성합니다. # mean(h - Y)^2 : 예측값과 실제값의 거리를 비용(손실) 함수로 정합니다. (손실함수가 줄어드는게 목표) cost = tf.reduce_mean(tf.square(hypothesis - Y)) # 텐서플로우에 기본적으로 포함되어 있는 함수를 이용해 경사 하강법 최적화를 수행합니다. optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1) #경우에 따라 학습률(얼마나 반영할 것인지)조절 가능. 이 경우는 10%씩 반영하겠다는 의미 # 옵티마이저관련 패키지는 train안에 있다넹 # 비용을 최소화 하는 것이 최종 목표 train_op = optimizer.minimize(cost) # 세션을 생성하고 초기화합니다. with tf.Session() as sess: sess.run(tf.global_variables_initializer()) # 최적화를 100번 수행합니다. for step in range(100): # sess.run 을 통해 train_op 와 cost 그래프를 계산합니다. # 이 때, 가설 수식에 넣어야 할 실제값을 feed_dict 을 통해 전달합니다. _, cost_val = sess.run([train_op, cost], feed_dict={X: x_data, Y: y_data}) #반환값의 두번째만 받기 위해 앞의 것은 안받겠다는 의미 (_,) print(step, cost_val, sess.run(W), sess.run(b)) # cost_val(오류값) : 적을수록 좋다 # 최적화가 완료된 모델에 테스트 값을 넣고 결과가 잘 나오는지 확인해봅니다. print("\n=== Test ===") print("X: 5, Y:", sess.run(hypothesis, feed_dict={X: 5})) print("X: 2.5, Y:", sess.run(hypothesis, feed_dict={X: 2.5})) # - put("enter a number: ")) fac = 1 for i in range(1, num + 1): fac = fac * i print("factorial of ", num, " is ", fac) # - num=int(input("enter value")) fact = 1 i = 1 while i <= num : fact = fact * i i = i + 1 print("factorial of",num,"is",fact) # + # Python program to display the Fibonacci sequence def recur_fibo(n): if n <= 1: return n else: return(recur_fibo(n-1) + recur_fibo(n-2)) nterms = 10 # check if the number of terms is valid if nterms <= 0: print("Plese enter a positive integer") else: print("Fibonacci sequence:") for i in range(nterms): print(recur_fibo(i)) # - n = int(input("enter value=")) fibonacci=[0,1] if n > 2: for i in range(2,n): n1=fibonacci[i-1]+fibonacci[i-2] fibonacci.append(n1) print (fibonacci) # + #Operators in Python #Arthimetic Operators x = 10 y = 5 print("x + y =",x + y) print("x - y =",x - y) print("x * y = ",x * y) print("x / y =",x / y) print("x // y =",x // y) print("x**y =", x**y) #Comparisons Operators print("x > y is", x > y) print("x < y is",x < y) print("x==y is", x == y) print("x != y is", x != y) print(" x >= y is", x >= y) print("x <= y is", x <= y) # + #Logical and Operator n1 = 10 n2 = 20 n3 = 30 if n1 > n2 and n1 > n3: print("n1 is the largest number") if n2 > n1 and n2 > n3: print("n2 is the largest number") if n3 > n1 and n3 > n2: print("n3 is the largest number") #logical OR operator ch = input("enter character:") if(ch=='A' or ch=='a' or ch=='E' or ch=='e' or ch=='I' or ch=='i' or ch=='O' or ch=='o' or ch=='U' or ch=='u'): print(ch," is a vowel") else: print(ch,"is a constant") #Python Identity Operators x = ["apple", "banana"] y = ["apple", "banana"] z = x print(x is z) print(x is y) print(x is not y) print(x is not z) #Python Membership Operators x = 
3 y =21 list1= [10,20,6,3,21,89] print( x in list1) print(y in list1) print(y not in list1) # # - _test[y_hat_test < 0] = 0 print('Train model results') regression_results(y_train, y_hat_train, X_train) print('Test model results') regression_results(y_test, y_hat_test, X_test) # ## 2 Feature Model Exploration y = df['price'] X = df.drop(columns=drops, axis=1) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2) poly = PolynomialFeatures(2) #Rebuild Model with Training Data: X_fin_train = poly.fit_transform(X_train) reg_poly_train = LinearRegression().fit(X_fin_train, y_train) y_hat_train = reg_poly_train.predict(poly.fit_transform(X_train)) y_hat_train[y_hat_train < 0] = 0 #Build Model with Testing Data: X_fin_test = poly.fit_transform(X_test) reg_poly_test = LinearRegression().fit(X_fin_test, y_test) y_hat_test = reg_poly_test.predict(poly.fit_transform(X_test)) y_hat_test[y_hat_test < 0] = 0 print('Train model results') regression_results(y_train, y_hat_train, X_train) print('Test model results') regression_results(y_test, y_hat_test, X_train) # Looks like the two feature model is the way to go! # Building the final model: X_fin = poly.fit_transform(X) reg_poly = LinearRegression().fit(X_fin, y) y_hat = reg_poly.predict(poly.fit_transform(X)) y_hat[y_hat < 0] = 0 plt.figure(figsize = (14,10)) plt.scatter(y, y_hat, alpha=0.2, color='blue', edgecolors='gray') plt.plot([y.min(), y.max()], [y.min(), y.max()], 'k--', lw=4) plt.xlabel('Measured Price', size=12) plt.ylabel('Predicted Price', size=12) plt.title("Engineered Model - Logarithmic Target", size=14) plt.show();
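# The fit/transform and predict steps above can also be wrapped in a single pipeline, so train and
# test data always pass through the same polynomial expansion. A sketch, assuming scikit-learn as
# used above:

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

poly_pipeline = make_pipeline(PolynomialFeatures(2), LinearRegression())
# e.g. poly_pipeline.fit(X_train, y_train); y_hat_test = poly_pipeline.predict(X_test)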
5,443
/Untitled0.ipynb
1b4cca849b97e365e7f3249cbb1d9edddfd14721
[]
no_license
sireesha-1/pythonb7
https://github.com/sireesha-1/pythonb7
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
2,866
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Feature Extraction # - One-hot Encoding # - Bag of Words # - Ngram # - TF-IDF # - Word2Vec # #### One-hot Encoding from sklearn.preprocessing import OneHotEncoder from sklearn.feature_extraction.text import CountVectorizer from sklearn.feature_extraction.text import TfidfVectorizer from nltk.tokenize import word_tokenize ohe = OneHotEncoder(handle_unknown = 'ignore') text = ['This is the first document.','This document is the second document.','And this is the third one.', 'Is this the first document?'] words = [word_tokenize(sent) for sent in text] # Important Step!!! print(words) tokens = [] for i in range(len(words)): tokens = tokens + words[i] tokens = list(sorted(set(tokens))) # only tokens (full tokens), set tokens(unique tokens) print(tokens) vo = [[w] for w in tokens] print(vo) X = ohe.fit_transform(vo).toarray() # one hot Encoding X ohe.inverse_transform([[1,0,0,0,0,0,0,0,0,0,0,0,0]]) ohe.transform([['sachin']]).toarray() ohe.transform([['this']]).toarray() # ### Bag of words text # word level vectorizer = CountVectorizer() X = vectorizer.fit_transform(text).toarray() print(X) vocab = vectorizer.get_feature_names() print(vocab) new_word = vectorizer.transform(['This can be the first document']).toarray() print(new_word) # Char Level vectorizer = CountVectorizer(analyzer = 'char') # to char lever ('a','b'etc) X = vectorizer.fit_transform(text).toarray() X vocab = vectorizer.get_feature_names() list(vocab) new_word = vectorizer.transform(['This can be my first document']).toarray() new_word # ### N-gram # Word level vectorizer = CountVectorizer(ngram_range = (1,2)) X = vectorizer.fit_transform(text).toarray() X vocab = vectorizer.get_feature_names() vocab # char level vectorizer = CountVectorizer(ngram_range = (1,2), analyzer = 'char') X = vectorizer.fit_transform(text).toarray() X vocab = vectorizer.get_feature_names() vocab # TF-IDF # Wordlevel vectorizer = TfidfVectorizer() X = vectorizer.fit_transform(text).toarray() X vocab = vectorizer.get_feature_names() vocab new_word = vectorizer.transform(['This can be my first document']).toarray() new_word # Char level vectorizer = TfidfVectorizer(analyzer = 'char') X = vectorizer.fit_transform(text).toarray() X vocab = vectorizer.get_feature_names() vocab # char level vectorizer = TfidfVectorizer(analyzer = 'char', ngram_range = (2,2)) X = vectorizer.fit_transform(text).toarray() X # Char level vectorizer = TfidfVectorizer(ngram_range=(2,2)) X = vectorizer.fit_transform(text).toarray() X # ## Word2Vec from gensim.models import Word2Vec from nltk.tokenize import word_tokenize from gensim.models import Phrases from gensim.models.doc2vec import Doc2Vec, TaggedDocument from gensim.models import FastText data = ['This is the first document.','This is the second second document.', 'And the third one.', 'Is this the first document?'] tokens = [word_tokenize(text) for text in data] print(tokens) model = Word2Vec(tokens, size=5,window=5,min_count=1,workers=4) model.wv.vocab model.wv['This'] model['This'] model.save('Word2Vec.model') model = Word2Vec.load('Word2Vec.model') model.wv.similar_by_vector(model.wv['This']) model.wv.similar_by_word('This') bigram_transformer = Phrases(tokens) print(bigram_transformer.vocab) model = Word2Vec(bigram_transformer[tokens], min_count=1) documents = [TaggedDocument(doc, [i]) for i, doc in 
enumerate(tokens)] model = Doc2Vec(documents, vector_size=5, window=2, min_count=1, workers=4) model.infer_vector(["this is"]) model.infer_vector(["this"]) model.wv.vocab model = FastText(size=4, window=3, min_count=1) model.build_vocab(sentences=tokens) model.train(sentences=tokens, total_examples=len(tokens), epochs=10) model.wv["This"] sidual) # Show the plot plt.show() interact_plot = ipywidgets.interact(interactive_residual_plot, intercept=ipywidgets.FloatSlider(value=5, min=0, max=10, step=0.1, description="Intercept"), slope=ipywidgets.FloatSlider(value=0, min=-1, max=1, step=0.05, description="Slope")); output = interact_plot.widget.children[-1] # This should prevent flickering output.layout.height = '500px' # + [markdown] id="OUKp_MjXXSIs" # ## Parameter optimization # # What are the best parameter values? We can only know this in relation to a metric like our definition of the residual $\sum_i (y_i - \hat y_i)^2$. Any best/good set of parameters are always evaluated some quality metric, often called "loss." For simple linear regression, the loss function is squared loss, as in the code above. We could image other loss functions for simple regression like using the $l_1$ distance instead of $l_2$ (squared euclidean distance). The loss function then becomes $\sum_i |y_i - \hat y_i|$. # # Maybe the most simple way of finding the best parameters could be to choose random values for the parameters and only keep the set with the lowest loss (i.e., the lowest residual). Let's see what this might look like. # + id="BPuEjg8Wmuv6" def model(x, theta): """Basic linear model""" ret = theta[0] + x*theta[1] return ret.ravel() def loss(X, y, theta, order=2): """Finds the residual given the data and the model parameters.""" if order==2: return np.sum((y - model(X, theta))**2) # l2 loss elif order==1: return np.sum(np.abs(y - model(X, theta))) # l1 loss # + colab={"base_uri": "https://localhost:8080/"} id="kbnCxM6mmu56" outputId="8a7bc2a3-07d2-47f5-8d59-d5febc821fc5" for i in range(10): theta = np.random.uniform(-10, 10, size=2) print("theta: %.1f, %.1f \tLoss: %.1f" % (theta[0], theta[1], loss(X, y, theta))) # + [markdown] id="MRP1Pv6JnSBQ" # This will take some time. There must be a better way! # # We could imagine other ways. Below, I have defined a couple of ways to do optimization. Go through them all (not necessarily in order) and try to understand what is happening. Each algorithm uses the model, data, and residual defined above. If you want to change something from above, simply load the new data or rewrite the loss function to use some other metric. Each algorithm will use your `model` and `loss` functions and return a history of parameters that can be used with the plotting functions further down in this notebook. 
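# + [markdown]
# As one more baseline in the same spirit (purely illustrative), a brute-force grid search over a
# small box of intercepts and slopes can use the same return convention, so its output also works
# with the plotting helpers further down.

# +
def grid_search(intercepts=np.linspace(-5, 10, 31), slopes=np.linspace(-1, 2, 31)):
    """Exhaustive search over a fixed grid of (intercept, slope) pairs."""
    best_theta = np.array([intercepts[0], slopes[0]])
    history = [best_theta.copy()]
    for b0 in intercepts:
        for b1 in slopes:
            theta = np.array([b0, b1])
            if loss(X, y, theta) < loss(X, y, best_theta):
                best_theta = theta
                history.append(theta)
    return best_theta, np.concatenate([np.vstack(h).T for h in history])
# -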
# + id="yrwhNV67ouVj" def coordinate_descent(initial_theta, n_iterations=50, step=.25): """Coordinate descent (look at wikipedia for definition)""" theta = initial_theta.copy() history = list() for iteration in range(n_iterations): theta_changed_during_iter = False for dim in range(len(initial_theta)): for direction in [-1, 1]: new_theta = theta.copy() new_theta[dim] += direction*step while loss(X, y, new_theta) < loss(X, y, theta): theta = new_theta theta_changed_during_iter = True history.append(theta) new_theta = theta.copy() new_theta[dim] += direction*step if not theta_changed_during_iter: break return theta, np.concatenate([np.vstack(h).T for h in history]) def finite_difference_gradient_descent(initial_theta, n_iterations=20, gamma=.0001): """Gradient descent optimization, where the gradient is approximated using finite difference. Look at scipy.optimize.approx_fprime to learn more.""" from scipy.optimize import approx_fprime theta = initial_theta.copy() history = list() for iteration in range(n_iterations): gradient = approx_fprime(theta, lambda t: loss(X, y, t), epsilon=.001) theta = theta - gamma*gradient history.append(theta) return theta, np.concatenate([np.vstack(h).T for h in history]) def random_changes(initial_theta, n_iterations=200, noise_sd=.1): """Adds random noise to the parameter vector in the hope of improving the loss.""" theta = initial_theta.copy() history = [theta] for iteration in range(n_iterations): new_theta = theta + np.random.normal(scale=noise_sd, size=theta.shape) if loss(X, y, new_theta) < loss(X, y, theta): theta = new_theta history.append(theta.copy()) return theta, np.concatenate([np.vstack(h).T for h in history]) def scipy_minimize(initial_theta): """Optimization using external library. Look at the manual of scipy.optimize.minimize for specifics.""" from scipy.optimize import minimize history = [initial_theta.copy()] def store_theta_during_optimization(current_theta): history.append(current_theta) res = minimize(lambda t: loss(X, y, t), initial_theta, callback=store_theta_during_optimization) return res.x, np.concatenate([np.vstack(h).T for h in history]) # + id="njcuFRAzo8nR" def plot_loss_landscape(history): xx, yy = np.meshgrid(np.linspace(min(history[:, 0].min(), -5), max(history[:, 0].max(), 10), 100), # Intercept np.linspace(min(history[:, 1].min(), -1), max(history[:, 1].max(), 2), 100)) # Slope Z = np.apply_along_axis(lambda theta: loss(X, y, theta), 1, np.concatenate([np.vstack(xx.ravel()), np.vstack(yy.ravel())], axis=1)) Z = Z.reshape(xx.shape) fig = plt.figure(figsize=(6, 6)) ax = fig.subplots(1, 1) ax.legend(loc='lower right') # ax[0].set_title("Residual (squared error): %.1f" % residual(X, y, theta)) ax.contourf(xx, yy, np.log(Z)) ax.set_xlabel("Intercept") ax.set_ylabel("Slope") ax.plot([h[0] for h in history], [h[1] for h in history], 'r.-', alpha=.7, zorder=1, label="Path to optimum") ax.scatter(history[-1, 0], history[-1, 1], s=100, color='r', edgecolors='k', zorder=2, label="Optimum") ax.legend() fig.tight_layout() fig.show() def plot_loss(history): fig = plt.figure(figsize=(8, 4)) ax = fig.subplots(1, 1) ax.plot([loss(X, y, h) for h in history]) # ax.set_yscale('log') ax.set_ylabel("Loss") ax.set_xlabel("Step") fig.show() # + colab={"base_uri": "https://localhost:8080/", "height": 458} id="SlhJbP1kCN34" outputId="c1d2ca2e-ea6f-4831-da94-bd72104a71a7" theta = np.random.uniform(-10, 10, size=2) theta, history = random_changes(theta) plot_loss_landscape(history) # + colab={"base_uri": "https://localhost:8080/", "height": 279} 
id="Vv4kcgzj9ZPj" outputId="bb6e1e58-c282-40de-e316-12a96ee06631" plot_loss(history) # + colab={"base_uri": "https://localhost:8080/", "height": 566, "referenced_widgets": ["03bd048419224dcc8e1c7a95a7c84693", "3a5c43196e2e49559bebef2564602f3f", "36097c17344d4e1da5a021db9ecb5075", "f0a93220bf4a4c88bcbfe28b7eed5be5", "8cfc824a2b224e38ade309f6c151c4b0", "912f492ba80a4ce8b8014ce8432b45a1", "ead747ee96584cacb0c380dd7add9530"]} id="1QKSRDEGezMY" outputId="07831161-6344-4370-9afd-96008bb29420" def plot_wrapper(iteration): intercept, slope = history[iteration] print("Intercept: %.1f, Slope: %.1f" % (intercept, slope)) interactive_residual_plot(intercept, slope) interact_plot = ipywidgets.interact(plot_wrapper, # iteration = ipywidgets.Play(min=0, max=len(history)-1, step=1, value=0)); iteration = ipywidgets.Play(min=0, max=len(history)-1, step=max(1, len(history)//100), value=0)); output = interact_plot.widget.children[-1] # This should prevent flickering output.layout.height = '500px' # Note that the play button doesn't always show up in colab. The clickable area is still there though. # + colab={"base_uri": "https://localhost:8080/", "height": 466} id="52MK_jKi79cv" outputId="1dfa255b-54c8-475f-d5f3-bfa10fdc459a" from mpl_toolkits.mplot3d.axes3d import Axes3D xx, yy = np.meshgrid(np.linspace(-5, 10, 100), # Intercept np.linspace(-1, 2, 100)) # Slope fig = plt.figure(figsize=(16, 8)) for order in [1, 2]: ax = fig.add_subplot(1, 2, order, projection='3d') Z = np.apply_along_axis(lambda theta: loss(X, y, theta, order=order), 1, np.concatenate([np.vstack(xx.ravel()), np.vstack(yy.ravel())], axis=1)) Z = Z.reshape(xx.shape) ax.plot_surface(xx, yy, Z, cmap='viridis', rstride=5, cstride=5) ax.set_title("$l_%i$ loss surface" % order) ax.set_xlabel("Intercept") ax.set_ylabel("Slope") ax.set_zlabel("Loss") ax.view_init(30, 120) plt.show() # + [markdown] id="KNjpWKQY7PpW" # ## Probabilistic model # + [markdown] id="fnfkT7A4-cXc" # $P(\theta \mid \mathfrak{D}) \propto P(\mathfrak{D} \mid \theta) P(\theta)$ # # $f(x_{1},\ldots ,x_{n}\mid \mu ,\sigma ^{2})=\prod _{i=1}^{n}f(x_{i}\mid \mu ,\sigma ^{2})=\left({\frac {1}{2\pi \sigma ^{2}}}\right)^{n/2}\exp \left(-{\frac {\sum _{i=1}^{n}(x_{i}-\mu )^{2}}{2\sigma ^{2}}}\right)$ # + id="_F2gb4k__WYi" def bayesian_model(x, theta): """Bayesian linear model""" ret = theta[0] + x*theta[1] return ret.ravel() from scipy.stats import norm def model_log_likelihood(X, y, theta): """""" return np.sum(norm(scale=theta[2]).logpdf(y-bayesian_model(X, theta))) # + colab={"base_uri": "https://localhost:8080/"} id="GU4aB4tuEO45" outputId="8b814ce5-49f9-40a7-ddca-b78a30684b19" print("Intercept\tSlope\tNoise\t\tLog loss") for i in range(10): theta = np.random.uniform(-10, 10, size=3) theta[2] = np.abs(theta[2]) # Theta_2 must not be negative, why? 
print("%.1f\t\t%.1f\t%.1f\t->\t%.1f" % (theta[0], theta[1], theta[2], model_log_likelihood(X, y, theta))) # + colab={"base_uri": "https://localhost:8080/", "height": 466} id="GczkrLnFBUbc" outputId="d211f4d6-8a00-4655-e568-f60a4b413eb7" from mpl_toolkits.mplot3d.axes3d import Axes3D xx, yy = np.meshgrid(np.linspace(-5, 10, 100), # Intercept np.linspace(-1, 2, 100)) # Slope fig = plt.figure(figsize=(16, 8)) for i, noise_level in enumerate([2, 6]): ax = fig.add_subplot(1, 2, i+1, projection='3d') thetas_on_grid = np.concatenate([np.vstack(xx.ravel()), np.vstack(yy.ravel())], axis=1) thetas_on_grid = np.append(thetas_on_grid, noise_level*np.ones((thetas_on_grid.shape[0], 1)), axis=1) Z = np.apply_along_axis(lambda theta: model_log_likelihood(X, y, theta), 1, thetas_on_grid) Z = Z.reshape(xx.shape) ax.plot_surface(xx, yy, np.exp(Z), cmap='viridis') ax.set_title("Distribution over intercept and slope with fixed noise=%.1f" % noise_level) ax.set_xlabel("Intercept") ax.set_ylabel("Slope") ax.set_zlabel("Log Likelihood") ax.view_init(30, 120) plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 531} id="7nkKXg5bCa59" outputId="b1141685-374d-458f-e0b0-daed720dd4b4" from scipy.optimize import minimize history = [np.asarray([3, .5, .2])] def store_theta_during_optimization(current_theta): history.append(current_theta) res = minimize(lambda theta: -model_log_likelihood(X, y, theta), history[0], bounds=[(None, None), (None, None), (1e-3, None)], callback=store_theta_during_optimization) theta = res.x interactive_residual_plot(theta[0], theta[1]) print("MAP estimate of theta: %s" % theta) time.sleep(20) print(f'Stopped at {city_cursor}, page {int(page_cursor/10)}') # + # from collections import defaultdict # from IPython.display import clear_output # max_results_per_city = 3000 # Set this to a high-value (5000) to generate more results. # # Crawling more results, will also take much longer. First test your code on a small number of results and then expand. # results = defaultdict(list) # status = defaultdict(list) # captcha=False # for start in tqdm(range(0, max_results_per_city, 10)): # url = f"https://sg.indeed.com/jobs?q=Data+Scientist&start={start}" # result = requests.get(url) # status[city].append(result.status_code) # soup = BeautifulSoup(result.text, 'html.parser') # if 'hCaptcha' not in str(soup.prettify): # for post in soup.find_all('div', attrs={'class':'jobsearch-SerpJobCard'}): # results['title'].append(extract_job_title(post)) # results['company'].append(extract_company(post)) # results['location'].append(extract_location(post)) # results['country'].append('singapore') # results['salary'].append(extract_salary(post)) # time.sleep(25) # else: # print('Caught') # city_cursor = city # page_cursor = start # captcha=True # break # if captcha==True: # break # time.sleep(20) # print(f'Stopped at {city_cursor}, page {int(page_cursor/10)}') # - pages results pd.DataFrame(results).shape len(cities) # + # cities = ['New+York', 'Chicago', 'San+Francisco', 'Austin', 'Seattle', # 'Los+Angeles', 'Philadelphia', 'Atlanta', 'Dallas', 'Pittsburgh', # 'Portland', 'Phoenix', 'Denver', 'Houston', 'Miami'] # from collections import defaultdict # from IPython.display import clear_output # max_results_per_city = 400 # Set this to a high-value (5000) to generate more results. # # Crawling more results, will also take much longer. First test your code on a small number of results and then expand. 
# results = defaultdict(list) # status = defaultdict(list) # + # def scraper(cities, city_tracker, cursor): # captcha = False # results = defaultdict(list) # for i in range(city_tracker, len(cities)): # print(f'Scraping for {cities[i]}') # for start in tqdm(range(cursor, max_results_per_city, 10)): # cursor=0 # url = f"http://www.indeed.com/jobs?q=data+scientist+%2420%2C000&l={city}&start={start}" # result = requests.get(url) # soup = BeautifulSoup(result.text, 'html.parser') # if 'hCaptcha' not in str(soup.prettify): # for post in soup.find_all('div', attrs={'class':'jobsearch-SerpJobCard'}): # results['title'].append(extract_job_title(post)) # results['company'].append(extract_company(post)) # results['location'].append(extract_location(post)) # results['salary'].append(extract_salary(post)) # time.sleep(3) # else: # print('caught') # city_cursor = city # page_cursor = start # captcha = True # break # if captcha == True: # break # time.sleep(15) # return results, city_cursor, page_cursor # + # #intialise page and city trackers # max_results_per_city = 300 # page = 0 # city = 0 # page = 0 # first_set = True # while (page < max_results_per_city) and (city != len(cities)): # results, city, page = scraper(cities, city, page) # if first_set==True: # results_main = results # else: # results_main.update(results) # time.sleep(120) # - results spare_res = results.copy() results # + [markdown] focus=false id="20339c09-5032-4e27-91be-286e9b46cd13" # #### Use the functions you wrote above to parse out the 4 fields - location, title, company and salary. Create a dataframe from the results with those 4 columns. # + focus=false id="6e259594-1c52-436b-ab9e-527e071941c1" ## YOUR CODE HERE jobs_all = pd.DataFrame(results, columns = ['title', 'company', 'location', 'salary']) # - jobs_all.shape # + [markdown] focus=false id="ff98ce64-78a7-441f-a675-63464e32c834" # Lastly, we need to clean up salary data. # # 1. Only a small number of the scraped results have salary information - only these will be used for modeling. # 1. Some of the salaries are not yearly but hourly or weekly, these will not be useful to us for now. # 1. Some of the entries may be duplicated. # 1. The salaries are given as text and usually with ranges. # + [markdown] focus=false id="ff98ce64-78a7-441f-a675-63464e32c834" # #### Find the entries with annual salary entries, by filtering the entries without salaries or salaries that are not yearly (filter those that refer to hour or week). Also, remove duplicate entries. # + focus=false id="58533e57-f86b-494a-b841-e7b59c6229c6" ## YOUR CODE HERE jobs = jobs_all.drop_duplicates() jobs.dropna(inplace=True) jobs = jobs[jobs.salary.str.contains('year')] jobs.reset_index(drop=True, inplace=True) jobs # - jobs.location.unique() # + [markdown] focus=false id="7d4bc860-b214-4f75-9cd0-b234830b1ec2" # #### Write a function that takes a salary string and converts it to a number, averaging a salary range if necessary. 
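# + [markdown]
# As an aside, the hourly and weekly postings do not have to be thrown away entirely. A rough
# annualisation such as the sketch below could keep them; it assumes 40-hour weeks and 52 paid
# weeks per year (both assumptions, not something stated in the postings). The rest of this
# notebook nevertheless sticks with the yearly postings only.

# +
import re

def annualise(salary_text):
    """Very rough conversion of an hourly/weekly salary string to a yearly figure."""
    text = str(salary_text).replace(',', '')
    nums = [float(n) for n in re.findall(r'\d+(?:\.\d+)?', text)]
    if not nums:
        return None
    value = sum(nums) / len(nums)       # midpoint of a range, or the single quoted figure
    if 'hour' in text:
        return value * 40 * 52          # assumed 40 h/week, 52 weeks/year
    if 'week' in text:
        return value * 52
    return value                        # already yearly (or unlabelled)

# Example usage (not applied below):
# jobs_all['salary_yearly'] = jobs_all['salary'].apply(annualise)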
# + focus=false id="a0f701e0-80bd-40ba-9101-4535860c0968" import re def salary_formatter(sal): split = re.findall(r'[0-9]+', sal) if len(split)>1: return sum([float(val) for val in split])/len(split) return float(split[0]) salary_formatter('$114700 - £120000 a year') # - jobs.salary = jobs.salary.apply(salary_formatter) jobs # + [markdown] focus=false id="43e71edd-210e-42b1-9336-70a931f048af" # ### Save your results as a CSV # - jobsdb = pd.read_csv('../jobs.csv') jobsdb.reset_index(drop=True) jobs.reset_index(drop=True) jobs_new = pd.concat([jobsdb, jobs], axis=0) jobs_new.drop(['Unnamed: 0'], axis=1, inplace=True) jobs_new.drop_duplicates(inplace=True) jobs_new.to_csv(path_or_buf='../jobs.csv') # + [markdown] focus=false id="243e949e-2742-40af-872e-fec475fd306c" # ### Load in the data of scraped salaries # - jobs = pd.read_csv('../jobs.csv') jobs.drop(['Unnamed: 0'], axis=1, inplace=True) jobs from forex_python.converter import CurrencyRates currency = CurrencyRates() currency.get_rate('GBP', 'USD') # # MAY NOT WORK, MAY HAVE TO JUST USE THE CURRENCY CONVERSIONS ON THE DAY OF HAND IN jobs.shape # + focus=false id="588f9845-6143-4bcc-bfd1-85d45b79303d" ## YOUR CODE HERE # + [markdown] focus=false id="c7631f51-07f2-4c79-a093-3e9bc7849a48" # ### We want to predict a binary variable - whether the salary was low or high. Compute the median salary and create a new binary variable that is true when the salary is high (above the median). # # We could also perform Linear Regression (or any regression) to predict the salary value here. Instead, we are going to convert this into a _binary_ classification problem, by predicting two classes, HIGH vs LOW salary. # # While performing regression may be better, performing classification may help remove some of the noise of the extreme salaries. We don't have to choose the `median` as the splitting point - we could also split on the 75th percentile or any other reasonable breaking point. # # In fact, the ideal scenario may be to predict many levels of salaries. # + median = jobs.salary.median() def hi_or_lo(salary): if salary < median: return 'Low' return 'High' # + focus=false id="c20d2498-151c-44c3-a453-3a333c79a0ac" ## YOUR CODE HERE jobs['median_high_low'] = jobs.salary.apply(hi_or_lo) # - jobs from geopy import Nominatim jobs[:148].country = 'usa' def geo_loc(loc): try: geolocation = Nominatim(user_agent="app").geocode(loc) return geolocation.longitude, geolocation.latitude except: return np.nan def geo_lon(loc): try: geolocation = Nominatim(user_agent="app").geocode(loc) return geolocation.longitude except: return np.nan def geo_lat(loc): try: geolocation = Nominatim(user_agent="app").geocode(loc) return geolocation.latitude except: return np.nan jobs_test = jobs.copy() jobs_test['geo'] = jobs_test.location.apply(geo_loc) jobs_test['longitude'] = jobs_test.location.apply(geo_lon) jobs_test['latitude'] = jobs_test.location.apply(geo_lat) jobs_test.isnull().sum() jobs_test.dropna(inplace=True) geo_loc('brooklyn, ny 11201') # + [markdown] focus=false id="a7afb2c0-d41e-4779-8216-91cd8dd4473f" # #### Thought experiment: What is the baseline accuracy for this model? # + focus=false id="87a17d3d-b7f4-4747-9f75-f9af1d18a174" ## YOUR CODE HERE jobs.median_high_low.value_counts(normalize=True) # - print(f'baseline score is {round(jobs.median_high_low.value_counts(normalize=True)[0], 4)}') # <!-- The baseline accuracy for the model is 0.538. 
--> # + [markdown] focus=false id="4fb29de2-5b98-474c-a4ad-5170b72b9aea" # ### Create a classification model to predict High/Low salary. # # # - Start by ONLY using the location as a feature. # - Use at least two different classifiers you find suitable. # - Remember that scaling your features might be necessary. # - Display the coefficients/feature importances and write a short summary of what they mean. # - Create a few new variables in your dataframe to represent interesting features of a job title (e.g. whether 'Senior' or 'Manager' is in the title). # - Incorporate other text features from the title or summary that you believe will predict the salary. # - Then build new classification models including also those features. Do they add any value? # - Tune your models by testing parameter ranges, regularization strengths, etc. Discuss how that affects your models. # - Discuss model coefficients or feature importances as applicable. # - from geopy.geocoders import Nominatim # + def lower(string): if type(string)==str: return string.lower() return string jobs = jobs.applymap(lower) # - jobs pd.DataFrame(pd.DataFrame(jobs.company.unique()).sort_values(by=0))[0].unique() # + ## YOUR CODE HERE # def key_words(title): # def title_cleaner(title): return title.replace('sr.', 'senior').replace('sr', 'senior') def senior(title): if ('senior' in title) or ('sr' in title): return 1 return 0 def manager(title): if 'manag' in title: return 1 return 0 def lead(title): if 'lead' in title: return 1 return 0 ['senior', 'manager', 'lead'] # - jobs.title = jobs.title.apply(title_cleaner) # jobs['senior'] = jobs['title'].apply(senior) # jobs['manager'] = jobs['title'].apply(manager) # jobs['lead'] = jobs['title'].apply(lead) jobs from sklearn.model_selection import train_test_split from sklearn.feature_extraction.text import CountVectorizer from sklearn.preprocessing import OneHotEncoder from sklearn.compose import ColumnTransformer from sklearn.pipeline import make_pipeline from sklearn.tree import DecisionTreeClassifier from sklearn.model_selection import cross_val_score from sklearn.linear_model import LogisticRegression # ### For the models, I will try both voting and stacking ensemble methods. 
A RandomForestClassifier will be used as well as LogisticRegression with a GradientBoostClassifier # #### Test using the longitude and latitude extracted by geopy's Nominatim # ##### set up the X, y and train-test sets y = jobs_test['median_high_low'] X = jobs_test[['longitude', 'latitude']] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1, stratify=y) # ##### decision tree classifier dtc_ll = DecisionTreeClassifier().fit(X_train, y_train) dtc_ll.score(X_train, y_train) dtc_ll.score(X_test, y_test) dtc_ll.feature_importances_ # ##### logistic regression log_ll = LogisticRegression().fit(X_train, y_train) log_ll.score(X_train, y_train) log_ll.score(X_test, y_test) # #### Test using the location variable, countvectorized to find commonalities # ##### set up the X, y and train-test sets y = jobs['median_high_low'] X = jobs[['location']] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1, stratify=y) # ##### pipeline with a decision tree classifier cvec_location = CountVectorizer(stop_words='english', max_df=0.9, min_df=1) dtc_loc = DecisionTreeClassifier() col_trans = ColumnTransformer([('location', cvec_location, 'location')], remainder='passthrough') pipe_loc_dtc = make_pipeline(col_trans, dtc_loc) pipe_loc_dtc.fit(X_train, y_train) pipe_loc_dtc.score(X_train, y_train) pipe_loc_dtc.score(X_test, y_test) # ##### pipeline with a logistic regression model log_loc = LogisticRegression() pipe_loc_log = make_pipeline(col_trans, log_loc) pipe_loc_log.fit(X_train, y_train) pipe_loc_log.score(X_train, y_train) pipe_loc_log.score(X_test, y_test) feature_columns = pipe_loc_log['columntransformer'].get_feature_names() feature_columns[:5] len('location__') feature_names = [col[len('location__'):] for col in feature_columns] feature_names pipe_loc_log['logisticregression'].coef_[0] coefficients = pd.DataFrame(zip(feature_names, pipe_loc_log['logisticregression'].coef_[0]), columns=['name', 'coef']) coefficients.sort_values(by='coef', ascending=False) # ##### test using all available variables # ##### set up the X, y and train-test sets jobs_test = jobs.copy() y = jobs_test.pop('median_high_low') X = jobs_test[['title', 'company', 'location']] X = jobs_test[['title', 'company', 'location', 'country']] X X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1, stratify=y) # Since there is much variance in related job titles for the job posts, and also post codes # # Upon looking at unique values for the Company column, they seemed to be well standardised and there didn't seem much need to use NLP on the column since all instances of the same company would be recognised under one dummy heading. # # However, after reviewing the difference in scores between OneHotEncoder and CountVectorizer used on the company columns, both test and train scores whereby the Company column has been CountVectorized turn out better. Therefore NLP has been used on all text columns: Title, Company and Location. 
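# + [markdown]
# One way to make the OneHotEncoder-versus-CountVectorizer comparison for the company column
# explicit is to cross-validate the same logistic regression twice, swapping only that one
# transformer. The sketch below is illustrative rather than part of the tuned models that
# follow: it reuses the `X_train`/`y_train` split and the imports loaded above, and simply
# drops the remaining columns.

# +
company_options = [('one-hot company', OneHotEncoder(handle_unknown='ignore'), ['company']),
                   ('count-vectorized company', CountVectorizer(stop_words='english'), 'company')]

for label, company_tf, company_cols in company_options:
    ct = ColumnTransformer([('title', CountVectorizer(stop_words='english'), 'title'),
                            ('company', company_tf, company_cols),
                            ('location', CountVectorizer(stop_words='english'), 'location')],
                           remainder='drop')
    pipe_cmp = make_pipeline(ct, LogisticRegression(max_iter=1000))
    scores = cross_val_score(pipe_cmp, X_train, y_train, cv=5)
    print('%-25s mean CV accuracy: %.3f' % (label, scores.mean()))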
# ##### decision tree pipeline cvec_title = CountVectorizer(stop_words='english') cvec_company = CountVectorizer(stop_words='english') cvec_location = CountVectorizer(stop_words='english', max_df=0.9, min_df=1) ohe = OneHotEncoder(handle_unknown='ignore') dtc = DecisionTreeClassifier() col_trans = ColumnTransformer([('title', cvec_title, 'title'), ('company', cvec_company, 'company'), ('location', cvec_location, 'location'), ('country', ohe, ['country'])], remainder='passthrough') pipe_dtc = make_pipeline(col_trans, dtc) pipe_dtc.fit(X_train, y_train) pipe_dtc.score(X_train, y_train) pipe_dtc.score(X_test, y_test) # + # CVEC # Train # 0.9961977186311787 # Test # 0.7878787878787878 # WITHOUT COUNTRY # OHE # Train # 0.9961977186311787 # Test # 0.7727272727272727 # - # ##### logistic regression pipeline cvec_title = CountVectorizer(stop_words='english') cvec_company = CountVectorizer(stop_words='english') cvec_location = CountVectorizer(stop_words='english', max_df=0.9, min_df=1) ohe = OneHotEncoder(handle_unknown='ignore') log = LogisticRegression() col_trans = ColumnTransformer([('title', cvec_title, 'title'), ('company', cvec_company, 'company'), ('location', cvec_location, 'location'), ('country', ohe, ['country'])], remainder='passthrough') pipe_log = make_pipeline(col_trans, log) pipe_log.fit(X_train, y_train) pipe_log.score(X_train, y_train) pipe_log.score(X_test, y_test) # + # ohe company # train # 0.9581749049429658 # test # 0.7424242424242424 # WITHOUT COUNTRY # cvec # train # 0.9695817490494296 # test # 0.7575757575757576 # - # ### Now onto the Ensemble Methods from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier from sklearn.model_selection import GridSearchCV # #### Random Forest rfc = RandomForestClassifier(base_estimator=DecisionTreeClassifier(), n_estimators=500, max_depth=20, n_jobs=-2) rfc_pipe = make_pipeline(col_trans, rfc).fit(X_train, y_train) rfc_pipe.score(X_train, y_train) rfc_pipe.score(X_test, y_test) # ##### Grid Search on Random Forest # + # rfc_pipe.get_params() # - params = {'randomforestclassifier__ccp_alpha': [0.0, 0.1], 'randomforestclassifier__max_depth': [i for i in range(7, 20)], 'randomforestclassifier__min_impurity_decrease': [0.0, 0.1], 'randomforestclassifier__min_samples_leaf': [1, 2, 3], 'randomforestclassifier__min_samples_split': [2, 3, 4, 5], 'randomforestclassifier__min_weight_fraction_leaf': [0.0], 'randomforestclassifier__n_estimators': [500, 600, 700], 'randomforestclassifier__oob_score': [True, False], 'randomforestclassifier__random_state': [0], 'randomforestclassifier__verbose': [2]} gs_rfc = GridSearchCV(rfc_pipe, params, cv=5) gs_rfc.fit(X_train, y_train) gs_rfc.best_params_ gs_rfc_model = gs_rfc.best_estimator_ gs_rfc_model.score(X_train, y_train) gs_rfc_model.score(X_test, y_test) # ##### Notes # Strangely, even given the same input parameters as the original Random Forest pipeline, the tuned model's scores are considerably lower on the training score and marginally lower on the test score. # #### Gradient Boost gbc = GradientBoostingClassifier(base_estimator=LogisticRegression(), max_depth=45, n_estimators=500) gbc_pipe = make_pipeline(col_trans, gbc).fit(X_train, y_train) gbc_pipe.score(X_train, y_train) gbc_pipe.score(X_test, y_test) # Start by ONLY using the location as a feature. DONE # # Use at least two different classifiers you find suitable. DONE # # Remember that scaling your features might be necessary. 
NOT DONE # # Display the coefficients/feature importances and write a short summary of what they mean. NOT DONE - COEFS PAIRED WITH THE FEATURE NAMES BUT NOT WRITTEN UP FOR LOG AND NOT DONE AT ALL FOR DECISION TREE # # Create a few new variables in your dataframe to represent interesting features of a job title (e.g. whether 'Senior' or 'Manager' is in the title). ONE HOT / COUNT VEC - DOUBLE CHECK THIS # # Incorporate other text features from the title or summary that you believe will predict the salary. NOT DONE # # Then build new classification models including also those features. Do they add any value? YES THEY ADD VALUE - DONE THE BUILDING OF THE MODELS, CHECK AGAIN WITH THE FEATURES # # Tune your models by testing parameter ranges, regularization strengths, etc. Discuss how that affects your models. MODEL TUNING SET UP. STILL TESTING. # # Discuss model coefficients or feature importances as applicable. NOT DONE # ##### Grid Search on Gradient Boost gbc_pipe.get_params() gbc_pipe[''] params = {'gradientboostingclassifier__ccp_alpha': 0.0, 'gradientboostingclassifier__criterion': 'friedman_mse', 'gradientboostingclassifier__init': None, 'gradientboostingclassifier__learning_rate': 0.1, 'gradientboostingclassifier__loss': 'deviance', 'gradientboostingclassifier__max_depth': 45, 'gradientboostingclassifier__max_features': None, 'gradientboostingclassifier__max_leaf_nodes': None, 'gradientboostingclassifier__min_impurity_decrease': 0.0, 'gradientboostingclassifier__min_impurity_split': None, 'gradientboostingclassifier__min_samples_leaf': 1, 'gradientboostingclassifier__min_samples_split': 2, 'gradientboostingclassifier__min_weight_fraction_leaf': 0.0, 'gradientboostingclassifier__n_estimators': 500, 'gradientboostingclassifier__n_iter_no_change': None, 'gradientboostingclassifier__presort': 'deprecated', 'gradientboostingclassifier__random_state': None, 'gradientboostingclassifier__subsample': 1.0, 'gradientboostingclassifier__tol': 0.0001, 'gradientboostingclassifier__validation_fraction': 0.1, 'gradientboostingclassifier__verbose': 0, 'gradientboostingclassifier__warm_start': False} gs_gbc = GridSearchCV(rfc_pipe) # + [markdown] focus=false id="4fb29de2-5b98-474c-a4ad-5170b72b9aea" # ### Model evaluation: # # Your boss would rather tell a client incorrectly that they would get a lower salary job than tell a client incorrectly that they would get a high salary job. Adjust one of your models to ease his mind, and explain what it is doing and any tradeoffs. # # # - Use cross-validation to evaluate your models. # - Evaluate the accuracy, AUC, precision and recall of the models. # - Plot the ROC and precision-recall curves for at least one of your models. # + ## YOUR CODE HERE from sklearn.model_selection import cross_val_score from sklearn.metrics import confusion_matrix, classification_report, accuracy_score, recall_score, precision_score, f1_score # - cross_val_score(pipe, X_train, y_train, cv=5) _.mean() y_pred_test = pipe.predict(X_test) confusion_matrix(y_test, y_pred_test) classification_report(y_test, y_pred_test) tn, fp, fn, tp = confusion_matrix(y_test, y_pred_test).ravel() # + def plot_confusion_matrix(cm, classes, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues): """ This function prints and plots the confusion matrix. Normalization can be applied by setting `normalize=True`. 
""" if normalize: cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] print("Normalized confusion matrix") else: print('Confusion matrix, without normalization') print(cm) plt.imshow(cm, interpolation='nearest', cmap=cmap) plt.title(title) plt.colorbar() tick_marks = np.arange(len(classes)) plt.xticks(tick_marks, classes, rotation=45) plt.yticks(tick_marks, classes) fmt = '.2f' if normalize else 'd' thresh = cm.max() / 2. for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])): plt.text(j, i, format(cm[i, j], fmt), horizontalalignment="center", color="white" if cm[i, j] > thresh else "black") plt.ylabel('True label') plt.xlabel('Predicted label') plt.tight_layout() # Compute confusion matrix cnf_matrix = confusion_matrix(y_test, y_pred) np.set_printoptions(precision=2) # Plot non-normalized confusion matrix plt.figure() plot_confusion_matrix(cnf_matrix, classes=['Forged','Authorized'], title='Confusion matrix, without normalization') # + [markdown] focus=false id="4fb29de2-5b98-474c-a4ad-5170b72b9aea" # <img src="http://imgur.com/xDpSobf.png" style="float: left; margin: 25px 15px 0px 0px; height: 25px"> # # ### Bonus: # # - Answer the salary discussion by using your model to explain the tradeoffs between detecting high vs low salary positions. # - Discuss the differences and explain when you want a high-recall or a high-precision model in this scenario. # - Obtain the ROC/precision-recall curves for the different models you studied (at least the tuned model of each category) and compare. # + focus=false id="068dc1cf-7fd7-4f27-a1f1-7f0a5a221d29" ## YOUR CODE HERE # - # Within the classes predicted by the model (High, Low), there are large disparities in responsibility within the industry landscape. It is key to understand the responsibility-reward balance when it comes the jobsearch, and although the model will help identify those salaries that are above market median, it is not to say that it is not far more important to understand the full spectrum of features that contribute to whether a job is the right job. # ### Summarize your results in an executive summary written for a non-technical audience. # # - Writeups should be at least 500-1000 words, defining any technical terms, explaining your approach, as well as any risks and limitations. # ##### YOUR TEXT HERE IN MARKDOWN FORMAT # # From the data that has been collected, only the posts detailing salary estimations have been used, with those that have provided ranges being set to mid-range values. # # In searching for a job, the cornerstone factors with compatibility are title, location, salary and company. # # For the purpose of preparing the data, any incomplete records have been dropped from the dataset. For data preprocessing, Natural Language processing has been employed using the CountVectorizer to identify keywords that associate with particular salary regions. # <img src="http://imgur.com/xDpSobf.png" style="float: left; margin: 25px 15px 0px 0px; height: 25px"> # # ### BONUS # # Convert your executive summary into a public blog post of at least 500 words, in which you document your approach in a tutorial for other aspiring data scientists. Link to this in your notebook. # + ## YOUR LINK HERE IN MARKDOWN FORMAT
40,147
/CNN_for_counting_people/People_counter_with_baseline_CNN.ipynb
1744d14292cd05eafbe029e3b7bb7b67a5aaf173
[ "MIT" ]
permissive
jihyeheo/People_Countiong_Object_Detection
https://github.com/jihyeheo/People_Countiong_Object_Detection
1
0
null
null
null
null
Jupyter Notebook
false
false
.py
337,077
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # People counter with baseline CNN # - We train a Convolutional neural network with a soft-max output layer comprising 5 output nodes. # ## 0. Import Packages # + from __future__ import absolute_import from __future__ import division from __future__ import print_function import os import sys import tarfile import pandas as pd from six.moves import urllib from glob import glob import random import shutil from PIL import Image from sklearn.model_selection import train_test_split import keras from keras.models import Sequential from keras.layers import Dense, Dropout, Flatten from keras.layers.convolutional import Conv2D, MaxPooling2D from keras.models import load_model import tensorflow.compat.v1 as tf tf.disable_v2_behavior() import numpy as np import matplotlib import matplotlib.pyplot as plt from sklearn.preprocessing import OneHotEncoder from tensorflow.python.framework import ops # - # # 1. Make Dataset # + dir = "X_data/" listdir = os.listdir(dir) print(listdir[:5]) print("The number of dataset :", len(listdir)) # + x_orig = [] for i in range(1,len(listdir) + 1): img = Image.open('X_data/' + str(i) + '.jpg') img_resize = img.resize((128, 72)) data = np.array(img_resize) x_orig.append(data) x_orig = np.array(x_orig) # - y_orig = [] label = pd.read_csv('y_label.txt', encoding = 'cp949', header = None) y_orig = np.array(label) print("Shape of x_orig :", x_orig.shape) print("Shape of y_orig :", y_orig.shape) # plot image grid fig = plt.figure(figsize=(8, 8)) for x in range(4): for y in range(4): ax = fig.add_subplot(4, 4, 4*y+x+1) plt.imshow(x_orig[4*y+x]) plt.xticks(np.array([])) plt.yticks(np.array([])) plt.show() # + # Random shuffle s = np.arange(x_orig.shape[0]) np.random.shuffle(s) x_shuffle = x_orig[s,:] y_shuffle = y_orig[s,:] print(x_shuffle.shape) print(y_shuffle.shape) # - x_train_orig, x_test_orig, y_train_orig, y_test_orig = train_test_split(x_shuffle, y_shuffle, test_size=0.2, shuffle = True, random_state=1004) # + # Normalize image vectors x_train = x_train_orig/255. x_test = x_test_orig/255. # Convert train and test labels to one hot matrices enc = OneHotEncoder() y1 = y_train_orig.reshape(-1,1) enc.fit(y1) y_train = enc.transform(y1).toarray() y2 = y_test_orig.reshape(-1,1) enc.fit(y2) y_test = enc.transform(y2).toarray() # Explore your dataset print ("number of training examples = " + str(x_train.shape[0])) print ("number of test examples = " + str(x_test.shape[0])) print ("x_train shape: " + str(x_train.shape)) print ("y_train shape: " + str(y_train.shape)) print ("x_test shape: " + str(x_test.shape)) print ("y_test shape: " + str(y_test.shape)) # - # ## 2. 
Baseline CNN # params batch_size = 1 num_classes = 11 epochs = 100 # + model = Sequential() model.add(Conv2D(64, kernel_size=(5, 5), strides=(2, 2), padding='same', activation='relu', input_shape=(72, 128, 3))) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(32, (3, 3), activation='relu', padding='same')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.25)) model.add(Flatten()) model.add(Dense(512, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(num_classes, activation='softmax')) model.summary() # - model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy']) # %%time hist100 = model.fit(x_train, y_train, batch_size = 517//batch_size, epochs = epochs, verbose = 1, validation_data = (x_test, y_test)) # ## 3. Results def plot_accuracy_and_loss(history): plt.figure(1, figsize= (15, 10)) # plot train and test accuracy plt.subplot(221) plt.plot(history.history['accuracy']) plt.plot(history.history['val_accuracy']) plt.title('Baseline CNN Accuracy') plt.ylabel('accuracy') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') # plot train and test loss plt.subplot(222) plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('Baseline CNN loss') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper right') plt.show() plot_accuracy_and_loss(hist100) train_score = model.evaluate(x_train, y_train, verbose=0) test_score = model.evaluate(x_test, y_test, verbose=0) print('Train accuracy:', train_score[1]) print('Test accuracy:', test_score[1]) train_result_orig = model.predict(x_train) test_result_orig = model.predict(x_test) print(train_result_orig[:5]) train_result = np.argmax(train_result_orig, axis = 1) test_result = np.argmax(test_result_orig, axis = 1) print(train_result) print(test_result) # ## 4. Save and Load model # + # save model architecture model_json = model.to_json() open('model_100iter.json', 'w').write(model_json) # save model's learned weights model.save_weights('weights_100iter.h5', overwrite=True) # + # Load trained model from keras.models import model_from_json json_file = open("model_100iter.json", "r") loaded_model_json = json_file.read() json_file.close() loaded_model = model_from_json(loaded_model_json) # model weight load loaded_model.load_weights("weights_100iter.h5") print("Loaded model from disk")
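# ## 5. Inference on a single image
# With the architecture and weights reloaded, the model can count people in one new frame. The
# sketch below reuses the same 128x72 resize and /255 scaling that was applied to the training
# data; the path in the example call is just one of the training frames, and the argmax index is
# the predicted class, which maps back to an actual head count through the fitted encoder's
# `categories_`.

# +
def predict_count(image_path, model):
    """Predict the people-count class for a single image using the training preprocessing."""
    img = Image.open(image_path).resize((128, 72))   # width 128, height 72 -> array (72, 128, 3)
    x = np.array(img).astype('float32') / 255.
    x = x.reshape(1, 72, 128, 3)                     # single-sample batch
    probs = model.predict(x)
    return int(np.argmax(probs, axis=1)[0])

# Example call on one of the training frames:
# print(predict_count('X_data/1.jpg', loaded_model))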
5,638
/DataVisualisation/Plotly/.ipynb_checkpoints/Plotly_Tutorial-checkpoint.ipynb
f14991ba05266bbb9e5d26faecb39fd885fbe737
[]
no_license
nrglll/DataScienceNotes
https://github.com/nrglll/DataScienceNotes
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
1,650
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/sahanal2603/1BM18CS089/blob/master/Introduction_to_statistical_averages.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + id="C0rvM2mXZ_Kk" import numpy as np # + colab={"base_uri": "https://localhost:8080/"} id="TURzBSmvanAu" outputId="64ae6dcb-e003-4762-eadb-fbf7438aef2f" x=np.array([10, 15, 20, 25, 30]) N=len(x) print('length of the array = ', N) print('x[1] = ',x[1]) # + colab={"base_uri": "https://localhost:8080/"} id="sxIytWwBbo84" outputId="5d884e31-2ef1-4ed8-cb7c-292f68a24dcb" N = 5 for i in range(N): print(2*i) print('Within the loop') # + id="VipCdah4aE2m" # Example 1 # Consider the data as CIE of students data=np.array([20, 25, 40, 31, 35, 27, 29, 26, 38, 39]) N=len(data) # + colab={"base_uri": "https://localhost:8080/"} id="UyCXfQHhbBWL" outputId="6fce260a-ac53-41d3-fc6a-a543859cd8b1" # Example 1a #To obtain the average of a given sequence data=np.array([20, 25, 40, 31, 35, 27, 29, 26, 38, 39]) N=len(data) sum=0 for i in range(N): sum = sum + data[i] #print(i, sum) our_avg=sum/N print('\n Our average = ', our_avg) print('\n Average from built in function = ',np.mean(data)) # + colab={"base_uri": "https://localhost:8080/"} id="1DomJmcidJnO" outputId="e4455e27-a4aa-4ed4-ad9d-f71bd8b9c635" # Example 1b # To obtain the variance of the given sequence # Variance is given by the average of ((x-mean)^2) sum=0 for i in range(N): sum = sum + (data[i]-our_avg)**2 our_variance=sum/N print('\n Our Variance = ', our_variance) print('\n Variance from built in function = ', np.var(data)) # + colab={"base_uri": "https://localhost:8080/"} id="yPaChAWpd65Q" outputId="79e5a874-54a0-49d5-b385-314e18b0976d" # Example 1c # To obtain the standard deviation of the given sequence # Standard deviation is given by the square root of Variance our_std = np.sqrt(our_variance) print(' Our Standard Deviation = ', our_std) print('\n Average from built in function = ', np.std(data)) # + colab={"base_uri": "https://localhost:8080/"} id="xKrj89gge7UO" outputId="624cb7e3-8c6f-4bcd-db45-763d20c1b751" # Example 1d # Relating the mean, variance and mean of x^2 sum_x=0 sum_x_square=0 for i in range(N): sum_x = sum_x + data[i] sum_x_square = sum_x_square + (data[i])**2 mean_x=sum_x/N mean_x_square=sum_x_square/N the_diff= mean_x_square - (mean_x)**2 print('The difference = ', the_diff ) print('The variance of the data = ', np.var(data)) print('\n Hence, variance = E(x^2) - E(x)^2') # + colab={"base_uri": "https://localhost:8080/"} id="5VrRrJbZreMJ" outputId="f5dcd5d7-c855-43eb-ceb3-73fcaa039948" # Example 1e # To obtain the standard deviation of the given sequence # Variance is given by the Square_root of the variance sample1= np.random.choice(data,4) print('Random selection of samples = ', sample1) print('\n Variance of the sample = ', np.var(sample1)) print('\n Recall, that the variance of the whole data = ', np.var(data)) print('\n Variance of the sample with Bessel correction = ', np.var(sample1)*N/(N-1)) # + colab={"base_uri": "https://localhost:8080/"} id="G0HHuZCnrmAe" outputId="c838a035-a373-47d2-dd01-6773facf237b" # Example 1f mew=np.mean(data) new_data=data - mew new_mean=np.mean(new_data) print('new mean =', new_mean) print('Hence, when 
we subtract mean from every element, mean of new distribution = 0') # + colab={"base_uri": "https://localhost:8080/"} id="fC6Kx4R52Ar1" outputId="fcd82bce-8c58-4ad6-c61f-3d7d3a915033" # Example 1g data=np.array([20, 25, 40, 31, 35, 27, 29, 26, 38, 39 ]) N=len(data) mew=np.mean(data) sigma=np.std(data) new_data=(data - mew)/sigma new_mean=np.mean(new_data) new_sigma=np.std(new_data) print('\n new mean =', new_mean) print('\n new standard Deviation = ', new_sigma) print('\n Hence, when we subtract mean from every element and divide by Standard deviation') print('\n the new distribution has Zero mean and Unit Standard deviation') # + colab={"base_uri": "https://localhost:8080/", "height": 350} id="IQwMb3aE2L2n" outputId="ec7f9f29-9114-4a86-98d3-a15e82a74ef9" # Example 2 # Marks scored in CIE, this is our input variable # Marks scored in SEE, this is our output variable CIE=np.array([20, 25, 40, 31, 35, 27, 29, 26, 38, 39]) SEE=np.array([50, 60, 75, 72, 80, 55, 68, 60, 75, 95]) # let us observe the data plt.scatter(CIE,SEE, marker='x') plt.axis([15, 45, 0, 100]) plt.grid() plt.xlabel('CIE') plt.ylabel('SEE') # to compute the Covariance, Correlation N=len(CIE) mew_CIE = np.mean(CIE) mew_SEE = np.mean(SEE) sigma_CIE = np.std(CIE) sigma_SEE = np.std(SEE) mew_CIE_SEE = np.dot(CIE,SEE)/N covariance = (mew_CIE_SEE) - (mew_CIE)*(mew_SEE) correlation = covariance /(sigma_CIE * sigma_SEE) print('\n Covariance of CIE and SEE = ',covariance ) print('\n Correlation of CIE and SEE = ',correlation ) # + colab={"base_uri": "https://localhost:8080/", "height": 350} id="n5D0jvY-2bdS" outputId="2faaf503-4a27-40b1-90ae-e698d478ffe8" # Example 3 # Hours spent on the mobile phone per day, this is our input variable # Marks scored in SEE, this is our output variable CIE=np.array([3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9]) SEE=np.array([95, 60, 85, 80, 75, 55, 65, 60, 55, 50, 45, 40, 5,85 ]) # let us observe the data plt.scatter(CIE,SEE, marker='x') plt.axis([0, 10, 0, 100]) plt.grid() plt.xlabel('CIE') plt.ylabel('SEE') # to compute the Covariance, Correlation N=len(CIE) mew_CIE = np.mean(CIE) mew_SEE = np.mean(SEE) sigma_CIE = np.std(CIE) sigma_SEE = np.std(SEE) mew_CIE_SEE = np.dot(CIE,SEE)/N covariance = (mew_CIE_SEE) - (mew_CIE)*(mew_SEE) correlation = covariance /(sigma_CIE * sigma_SEE) print('\n Covariance of CIE and SEE = ',covariance ) print('\n Correlation of CIE and SEE = ',correlation ) # + colab={"base_uri": "https://localhost:8080/"} id="_CO2Lo5A2kYR" outputId="434e50e1-f239-4221-e5d5-bc079013cf52" # Example 4 # To print the letter Grade based on Score, Absolute Grading System # S if greater than 90 # A if greater than 75 # B if greater than 60 # C if greater than 40 # F is less than 40 score = 92 if score >= 90: grade = 'S' if score >= 75 and score <90: grade = 'A' if score >= 60 and score < 75: grade = 'B' if score >= 40 and score <60: grade = 'C' if score < 40: grade = 'F' print('Grade is = ', grade) # + colab={"base_uri": "https://localhost:8080/"} id="ntwyxH1b2vvT" outputId="e2ec2b81-ebc7-4a64-a7bc-acecdbc22f79" # Example 5 # To print the letter Grade based on Score, and class performance # Relgative Grading System # S if greater than mew+1.5sigma # A if greater than mew+0.5sigma # B if greater than mew-0.5sigma # C if greater than mew-1.5sigma # F if less than mew-1.5sigma #SEE=np.array([95, 60, 85, 80,75, 55, 65, 60, 55, 50, 45,40,35, 85 ]) SEE=np.array([75, 40, 65, 60,55, 35, 45, 40, 35, 30, 25, 20,15, 65 ]) mew=np.mean(SEE) sigma=np.std(SEE) score = 75 if score >= (mew + 1.5*sigma): grade = 
'S' if score >= (mew + 0.5*sigma) and score < (mew + 1.5*sigma): grade = 'A' if score >= (mew - 0.5*sigma) and score < (mew + 0.5*sigma): grade = 'B' if score >= (mew - 1.5*sigma) and score <(mew - 0.5*sigma): grade = 'C' if score < (mew - 1.5*sigma): grade = 'F' print('\n Class average = ', mew ) print('\n Class Standard Deviation = ', sigma) print('\n Your Score = ', score) print('\n hence Grade is = ', grade) print('\n',round(mew+1.5*sigma), round(mew+0.5*sigma), round(mew-0.5*sigma), round(mew-1.5*sigma) )
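# The covariance and correlation computed by hand in Examples 2 and 3 can be cross-checked
# against NumPy's built-in routines. Note that np.cov defaults to the sample (N-1)
# normalisation, so bias=True is needed to match the population formula used above; the
# correlation coefficient is unaffected by that choice. (Illustrative check only, reusing the
# Example 3 data.)

# +
CIE = np.array([3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9])
SEE = np.array([95, 60, 85, 80, 75, 55, 65, 60, 55, 50, 45, 40, 5, 85])

cov_matrix = np.cov(CIE, SEE, bias=True)    # population covariance (divide by N)
corr_matrix = np.corrcoef(CIE, SEE)         # normalisation cancels in the correlation

print('Covariance of CIE and SEE = ', cov_matrix[0, 1])
print('Correlation of CIE and SEE = ', corr_matrix[0, 1])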
7,780
/interactive_eos_symmetric_mixture.ipynb
69f8b790b610b05c9e24096fd3a9b9d2bab0b8ff
[]
no_license
viktorcikojevic/Universality-in-Ultradilute-Symmetric-Liquid-Bose-Bose-Mixtures
https://github.com/viktorcikojevic/Universality-in-Ultradilute-Symmetric-Liquid-Bose-Bose-Mixtures
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
43,756
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Saving and Loading Models # # In this notebook, I'll show you how to save and load models with PyTorch. This is important because you'll often want to load previously trained models to use in making predictions or to continue training on new data. # + # %matplotlib inline # %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torch import nn from torch import optim import torch.nn.functional as F from torchvision import datasets, transforms import helper import fc_model # + # Define a transform to normalize the data transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))]) # Download and load the training data trainset = datasets.FashionMNIST('F_MNIST_data/', download=True, train=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True) # Download and load the test data testset = datasets.FashionMNIST('F_MNIST_data/', download=True, train=False, transform=transform) testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True) # - # Here we can see one of the images. image, label = next(iter(trainloader)) helper.imshow(image[0,:]); # # Train a network # # To make things more concise here, I moved the model architecture and training code from the last part to a file called `fc_model`. Importing this, we can easily create a fully-connected network with `fc_model.Network`, and train the network using `fc_model.train`. I'll use this model (once it's trained) to demonstrate how we can save and load models. # + # Create the network, define the criterion and optimizer model = fc_model.Network(784, 10, [512, 256, 128]) criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.001) # - fc_model.train(model, trainloader, testloader, criterion, optimizer, epochs=2) # ## Saving and loading networks # # As you can imagine, it's impractical to train a network every time you need to use it. Instead, we can save trained networks then load them later to train more or use them for predictions. # # The parameters for PyTorch networks are stored in a model's `state_dict`. We can see the state dict contains the weight and bias matrices for each of our layers. print("Our model: \n\n", model, '\n') print("The state dict keys: \n\n", model.state_dict().keys()) # The simplest thing to do is simply save the state dict with `torch.save`. For example, we can save it to a file `'checkpoint.pth'`. torch.save(model.state_dict(), 'checkpoint.pth') # Then we can load the state dict with `torch.load`. state_dict = torch.load('checkpoint.pth') print(state_dict.keys()) # And to load the state dict in to the network, you do `model.load_state_dict(state_dict)`. model.load_state_dict(state_dict) # Seems pretty straightforward, but as usual it's a bit more complicated. Loading the state dict works only if the model architecture is exactly the same as the checkpoint architecture. If I create a model with a different architecture, this fails. # Try this model = fc_model.Network(784, 10, [400, 200, 100]) # This will throw an error because the tensor sizes are wrong! model.load_state_dict(state_dict) # This means we need to rebuild the model exactly as it was when trained. 
Information about the model architecture needs to be saved in the checkpoint, along with the state dict. To do this, you build a dictionary with all the information you need to compeletely rebuild the model. # + checkpoint = {'input_size': 784, 'output_size': 10, 'hidden_layers': [each.out_features for each in model.hidden_layers], 'state_dict': model.state_dict()} torch.save(checkpoint, 'checkpoint.pth') # - # Now the checkpoint has all the necessary information to rebuild the trained model. You can easily make that a function if you want. Similarly, we can write a function to load checkpoints. def load_checkpoint(filepath): checkpoint = torch.load(filepath) model = fc_model.Network(checkpoint['input_size'], checkpoint['output_size'], checkpoint['hidden_layers']) model.load_state_dict(checkpoint['state_dict']) return model model = load_checkpoint('checkpoint.pth') print(model) .reshape(-1, 1) @ xB.reshape(1, -1) /(xB @ x_k) U_x_k_b = null_space(xB) U_x_k = null_space(x_k.reshape(1, -1)) H_p = U_x_k_b.T @ PiB @ H @ U_x_k # fix eigenvector y = U_x_k @ solve(H_p, -U_x_k_b.T @ PiB @ g) if by_line: print('y=%s' % str(y)) x_k_n = (x_k + y)/(np.linalg.norm(x_k + y)) # update residual and lbd R = norm(x_k-x_k_n) x_k = x_k_n T_x_m_2 = symmetric_tv_mode_product(T, x_k, m-2) T_x_m_1 = T_x_m_2 @ x_k lbd = (pw(x_k.T, a) @ AA @ T_x_m_1) / (pw(x_k.T, a) @ AA @ x_k) # print('ctr=%d lbd=%f' % (ctr, lbd)) ctr += 1 x = x_k err = norm(symmetric_tv_mode_product( T, x, m-1) - lbd * x) if ctr < max_itr: converge = True return x, lbd, ctr, converge, err # + [markdown] id="XG-VA1x95TRt" colab_type="text" # We see the the separate choices of left inverse still give a fast convergence algorithm: # + id="89pJQMkm5Ohi" colab_type="code" outputId="472f2448-c148-4d9b-e848-163351320199" colab={"base_uri": "https://localhost:8080/", "height": 357} n = 10 m = 3 tol = 1e-10 max_itr = 200 np.random.seed(0) n_test = 10 for i in range(n_test): a = np.random.randint(0, 5) b = np.random.randint(0, 5) x_init = np.random.randn(n) x_init /= np.linalg.norm(x_init) BB = utils.gen_random_symmetric_pos(n) AA = utils.gen_random_symmetric_pos(n) T = utils.generate_symmetric_tensor(n, m) x, lbd, ctr, converge, err = ortho_sphere_power( T, max_itr, tol, x_init, a=a, b=b, AA=AA, BB=BB) print('x=%s, lbd=%f, ctr=%d, converge=%d, err=%f' % ( str(x), lbd, ctr, converge, err)) # + [markdown] id="Q0BjqzJUDhNO" colab_type="text" # ## **We note the the left inverse for the projection does not have an effect on convergence - all changes in xB give the same answer as the Schur form. 
There is a strong effect if we change A, the left inverse related to lambda.** # + id="zVxCY9qQdfvP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 272} outputId="738f4433-3876-4f08-baa5-6766ad5cd04c" n = 15 m = 3 tol = 1e-10 max_itr = 200 np.random.seed(0) n_test = 5 x_init = np.random.randn(n) x_init /= np.linalg.norm(x_init) a = 1 AA = np.eye(n) # a = np.random.randint(0, 5) # AA = utils.gen_random_symmetric_pos(n) T = utils.generate_symmetric_tensor(n, m) for i in range(n_test): b = np.random.randint(0, 7) BB = utils.gen_random_symmetric_pos(n) # BB = np.eye(n) # by_line will print the iteration line by line x, lbd, ctr, converge, err = ortho_sphere_power( T, max_itr, tol, x_init, a=a, b=b, AA=AA, BB=BB, by_line=False) print('b=%d x=%s, lbd=%f, ctr=%d, converge=%d, err=%f' % ( b, str(x), lbd, ctr, converge, err)) # + id="4XoyIIYWehto" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 272} outputId="b06bc855-558d-44a0-d92c-5f87c1a17e3b" # n = 15 # m = 3 tol = 1e-10 max_itr = 200 np.random.seed(0) n_test = 5 # x_init = np.random.randn(n) # x_init /= np.linalg.norm(x_init) # b = np.random.randint(0, 5) # BB = utils.gen_random_symmetric_pos(n) b = 1 BB = np.eye(n) # T = utils.generate_symmetric_tensor(n, m) for i in range(n_test): a = np.random.randint(0, 7) AA = utils.gen_random_symmetric_pos(n) # AA = np.eye(n) x, lbd, ctr, converge, err = ortho_sphere_power( T, max_itr, tol, x_init, a=a, b=b, AA=AA, BB=BB, by_line=False) print('a=%d x=%s, lbd=%f, ctr=%d, converge=%d, err=%f' % ( a, str(x), lbd, ctr, converge, err))
8,339
/FirstTensorFlow.ipynb
e0e0477ddfc0c162267465c1b263ba955de9d8a1
[]
no_license
Anuj-Kumar-QA/TensorFlow
https://github.com/Anuj-Kumar-QA/TensorFlow
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
4,431
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.15.2
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# https://github.com/pankymathur/simple-linear-regression-using-python-only

# One step of the gradient-descent learning algorithm
def step_gradient(bias_current, param_current, points, learningRate, iteration):
    bias_gradient = 0
    param_gradient = 0
    N = float(len(points))
    for i in range(len(points)):
        x = points[i, 0]  # row i, column 0
        y = points[i, 1]  # row i, column 1
        bias_gradient += -(2/N)*(y-((param_current*x)+bias_current))
        param_gradient += -(2/N)*x*(y-((param_current*x)+bias_current))
    new_bias = bias_current - (learningRate * bias_gradient)
    new_param = param_current - (learningRate * param_gradient)  # update the slope with its own gradient
    return [new_bias, new_param]


def gradient_descent_runner(points, starting_bias, starting_param, learning_rate, num_iterations):
    bias_1 = starting_bias
    param_1 = starting_param
    for i in range(num_iterations):
        bias_1, param_1 = step_gradient(bias_1, param_1, np.asarray(points), learning_rate, i)
    return [bias_1, param_1]


def compute_error_for_line_given_points(bias, param, points):
    totalError = 0
    error = 0
    for i in range(0, len(points)):
        x = points[i, 0]
        y = points[i, 1]
        error = (y - (param*x + bias)) ** 2   # squared error of row i
        totalError += error / len(points)     # running mean squared error
        print("At Row {0}, using b = {1} and m = {2}, Error = {3}".format(i, bias, param, error))
    print("\n Total Error is: {0}".format(totalError))
    return error, totalError


import pandas as pd  # pandas: a library popular for (financial) tabular data
# used here only to take a quick look at the dataset
dataset = pd.read_csv("diabetes.csv")
dataset
# column 1: BMI
# column 2: blood sugar

# +
import matplotlib.pyplot as plt  # matplotlib is a library for plotting
import matplotlib.cbook as cbook
# used here to visualize the dataset

import numpy as np  # a library specialized for matrix operations
# -

def graph():
    dv = np.loadtxt('diabetes.csv', delimiter=',')
    # print(dv)
    fig = plt.figure()
    ax = fig.add_subplot(1, 1, 1)
    x = dv[:, 0]
    y = dv[:, 1]
    ax.scatter(x, y)
    plt.title('diabetes')
    plt.xlabel('BMI')
    plt.ylabel('Blood Sugar')
    plt.show()

graph()


# +
def pankax():
    # dataset: diabetes.csv
    points = np.genfromtxt("diabetes.csv", delimiter=",")
    learning_rate = 0.001
    initial_bias = 1
    initial_param = 1
    num_iterations = 100
    print("\n First compute the Error for each row by using the equation y_predicted = param*x + bias and error = (ground truth - predicted value)^2, averaged over all rows, using random b = {0} and p = {1} \n".format(initial_bias, initial_param))
    compute_error_for_line_given_points(initial_bias, initial_param, points)
    print("\n Now, let's run gradient_descent_runner to get a new param and bias with a learning rate of {1} and {0} iterations \n".format(num_iterations, learning_rate))
    [b, p] = gradient_descent_runner(points, initial_bias, initial_param, learning_rate, num_iterations)
    print(b, p)
    compute_error_for_line_given_points(b, p, points)
    print("\n After {0} iterations, final b = {1}, p = {2} \n".format(num_iterations, b, p))
    print("\n Enter BMI to get blood Sugar \n")
    X_test = 27.2
    print("\n Test/Sample BMI is: {0}\n".format(X_test))
    y_test = p*X_test + b
    print("\n Blood sugar is {0} \n".format(y_test))


if __name__ == '__main__':
    pankax()
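# To visualise the result, the regression can be rerun outside pankax() and the fitted line
# drawn over the scatter plot. This sketch only reuses the functions and the diabetes.csv file
# from above.

# +
def plot_fit(num_iterations=100, learning_rate=0.001):
    points = np.genfromtxt("diabetes.csv", delimiter=",")
    b, p = gradient_descent_runner(points, 1, 1, learning_rate, num_iterations)
    x = points[:, 0]
    plt.scatter(x, points[:, 1], label='data')
    plt.plot(x, p * x + b, 'r', label='fitted line')
    plt.title('diabetes')
    plt.xlabel('BMI')
    plt.ylabel('Blood Sugar')
    plt.legend()
    plt.show()

plot_fit()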
3,598
/chapter07/.ipynb_checkpoints/action-week07-checkpoint.ipynb
cea9c3a34af99b297625a5c6e4a1aa39e637a1d8
[]
no_license
guobing21/BI_senior
https://github.com/guobing21/BI_senior
4
1
null
null
null
null
Jupyter Notebook
false
false
.py
26,842
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.15.2
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---

# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/YMGYM/TSE_Learning/blob/master/Beijing_air_plollution_2(final).ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# + [markdown] id="hPythiWfQCnX" colab_type="text"
# # Introduction
#
# This file is a practice implementation of the paper
# `미세먼지 예측 성능 개선을 위한 CNN-LSTM 결합 방법`
# (a CNN-LSTM combination method for improving fine-dust prediction performance)
# by 황철현 (Hwang Cheol-hyun) and 신강욱 (Shin Kang-wook).
#
# The [Beijing PM2.5 dataset](https://www.kaggle.com/djhavera/beijing-pm25-data-data-set)
# was used.

# + [markdown] id="hMnquIS4QfaO" colab_type="text"
# # Import All

# + id="SJ9ZCMavQg_n" colab_type="code" colab={}
import pandas as pd
import numpy as np
import tensorflow as tf
import tensorflow.keras as K
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler

# + [markdown] id="miGdslSrQXBM" colab_type="text"
# # Load Data

# + id="8xY3TOYzQbgv" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="0b05c48a-f19a-420f-e95a-6f014c464078"
# ! unzip /content/drive/My\ Drive/Datasets/beijing_air.zip -d data

# + id="6R5W7oRhQ6Ow" colab_type="code" colab={}
def get_data():
    all_data = pd.read_csv('/content/data/PRSA_data_2010.1.1-2014.12.31.csv')  # full data
    dropped_data = all_data.drop(['No', 'year', 'month', 'day', 'hour'], axis=1)  # drop columns that are not needed
    pm25 = dropped_data.pop('pm2.5')  # take out the PM2.5 (fine dust) column
    pm25 = pm25.fillna(method='pad')

    return pm25, dropped_data

# + [markdown] id="I5dnAMuOR1N0" colab_type="text"
# # Make Normalize Dataset

# + id="UhbeIxNDR1hX" colab_type="code" colab={}
class PmScaler():
    def __init__(self, pm25):
        self.scaler = MinMaxScaler()
        self.pm25 = pm25.fillna(1e-8, limit=1)
        self.pm25 = self.pm25.fillna(method="pad")

    def make_norlized_dataset(self, x=None, data_len=2):
        if x is None:
            x = self.pm25[data_len-1:]

        if isinstance(x, type(np.array([]))) == False:
            reshaped = x.to_numpy().reshape(-1, 1)
        else:
            reshaped = x.reshape(-1, 1)

        self.scaled_data = self.scaler.fit_transform(reshaped)
        return self.scaled_data

    def slice_data(self, rate=0.1, x=None):
        if x is None:
            x = self.scaled_data

        arrlen = int(len(x) * (rate) + 1)
        train, val, test = x[:-1 * (arrlen * 2)], x[-1 * (arrlen * 2): -1 * (arrlen)], x[-1 * (arrlen):]
        return train[1:], val[1:], test[1:]

    def invert_scale(self, x):
        inverse = self.scaler.inverse_transform(x)
        return inverse

# + id="Ri1ktmAkpauR" colab_type="code" colab={}
class ProxyDataScaler():
    def __init__(self, data):
        self.table = data
        self.scaler = MinMaxScaler()

    def change_cbwd_data(self):
        mapping = {}
        cols = self.table["cbwd"].value_counts().index
        for i, col in enumerate(cols):
            mapping[col] = i

        # mapping = {"SE" : 0, "NW": 1, "cv": 2, "NE":3}
        self.table = self.table.replace({'cbwd': mapping})
        print("cbwd data changed to number : {SE : 0, NW: 1, cv: 2, NE:3} ")

    def make_normalize_data(self):
        self.norm_data = self.scaler.fit_transform(self.table)
        return self.norm_data

    def slice_proxy_data(self, time):
        col_cnt = len(self.table.columns)

        if (self.norm_data is not None):  # check whether normalized data exists
            print("norm_data detected")
            if isinstance(self.norm_data, type(np.array([]))) == False:  # convert the data to a numpy array
                data = self.norm_data.to_numpy().astype("float32")
            else:
                data = self.norm_data.astype("float32")
        else:
            print("norm_data not detected")
            data = self.table.to_numpy().astype("float32")

        self.sliced_data = np.zeros(shape=(1, time, col_cnt))

        for i in range((len(data)-time) + 1):
            if i == 0:
                self.sliced_data = data[:i+time].reshape(1, time, -1)
            else:
                self.sliced_data = np.vstack((self.sliced_data, data[i:i+time].reshape(1, time, -1)))

        return self.sliced_data

    def split_data(self, data=None, rate=0.1):
        if data is None:
            arrlen = int(len(self.sliced_data) * (rate) + 1)
            data = self.sliced_data
        else:
            arrlen = int(len(data) * rate + 1)
            data = data

        train, val, test = data[:-1 * (arrlen * 2)], data[-1 * (arrlen * 2): -1 * (arrlen)], data[-1 * (arrlen):]
        return train, val, test

# + [markdown] id="COffyY3ow6By" colab_type="text"
# # New LSTM Data generator

# + id="weEk_Ih-w89V" colab_type="code" colab={}
class LSTMInputGenerator(K.utils.Sequence):
    def __init__(self, lstm_x, lstm_y, data_len, cnn_output, batch_size=1):
        self.lstm_data_gen = K.preprocessing.sequence.TimeseriesGenerator(lstm_x, lstm_x, batch_size=batch_size, length=data_len, shuffle=False)
        self.cnn_output = cnn_output[data_len-1:]
        print(f"lstm_data_gen len : {self.lstm_data_gen.__len__()}, cnn_output len : {len(self.cnn_output)}")
        self.batch_size = batch_size

    def __getitem__(self, index):
        lstm_x, lstm_y = self.lstm_data_gen[index]
        stack_data = self.cnn_output[index*self.batch_size: (index + 1)*self.batch_size].reshape(-1, 1, 1)
        # print(f"index : {index} / lstm_x shape : {lstm_x.shape} / stack_data.shape: {stack_data.shape}")
        return_x = np.concatenate((lstm_x, stack_data), axis=1)

        return return_x, lstm_y

    def __len__(self):
        return self.lstm_data_gen.__len__()

# + [markdown] id="1iO5V6OhRZNJ" colab_type="text"
# # Entire Model

# + id="arSPbMdqRepE" colab_type="code" colab={}
class EntireModel():
    def __init__(self, pm25, proxydata):

        # ----------- create the scaler classes -----------
        self.pm25 = pm25.fillna(1e-8, limit=1)
        self.pm25 = self.pm25.fillna(method="pad")
        print(f"pm25 length : {len(self.pm25)}" )

        self.proxydata = proxydata
        print(f"proxydata length : {len(self.proxydata)}" )

        self.pmScaler = PmScaler(pm25)
        self.proxyScaler = ProxyDataScaler(proxydata)

        # ---------- callbacks for training -----------
        self.lstm_callbacks = [K.callbacks.TensorBoard(log_dir='lstm_logs')]
        self.cnn_callbacks = [K.callbacks.TensorBoard(log_dir='cnn_logs')]

    def make_proxy_data_generator(self, data_len=2):
        # ---------- preprocess the proxy data -----------
        self.proxyScaler.change_cbwd_data()  # map the string column to numeric categories
        norm_data = self.proxyScaler.make_normalize_data()  # normalize the data
        self.proxyScaler.slice_proxy_data(data_len)  # slice the data into windows of length data_len
        print(f"sliced_data length : {len(self.proxyScaler.sliced_data)}// shape : {self.proxyScaler.sliced_data.shape} // data_len: {data_len}" )

        cnn_x_train, cnn_x_val, cnn_x_test = self.proxyScaler.split_data( rate = 0.1)  # split the x data
        print(f"cnn_x_train : {len(cnn_x_train)}, {cnn_x_train.shape}, cnn_x_val : {len(cnn_x_val)}, cnn_x_test length: {len(cnn_x_test)} // data_len: {data_len}" )

        self.grad_level = self._get_grad_pm25()  # compute the change (gradient) of PM2.5
        print(f"grad_level length : {len(self.grad_level)} // data_len: {data_len}" )

        cnn_y_train, cnn_y_val, cnn_y_test = self._cnn_y_split(self.grad_level, data_len)  # split the y data
        print(f"cnn_y_train : {len(cnn_y_train)}, cnn_y_val : {len(cnn_y_val)}, cnn_x_test length: {len(cnn_y_test)} // data_len: {data_len}" )

        # ---------- create the data generators ---------
        self.cnn_train_data_gen = K.preprocessing.sequence.TimeseriesGenerator(cnn_x_train, cnn_y_train, length=1, batch_size = 128, shuffle=False)
        self.cnn_val_data_gen = K.preprocessing.sequence.TimeseriesGenerator(cnn_x_val, cnn_y_val, length=1, batch_size = 128, shuffle=False)
        self.cnn_test_data_gen = K.preprocessing.sequence.TimeseriesGenerator(cnn_x_test, cnn_y_test, length=1, batch_size = 1, shuffle=False)
        print(f"cnn_train_data_gen length : {self.cnn_train_data_gen.__len__()} // data_len: {data_len}" )

        # ----------- build the model --------------
        print("Make CNN Model....")
        self.cnn_model = self._get_cnn_model(input_shape=(1, cnn_x_train.shape[1], cnn_x_train.shape[2]))

    def make_lstm_data_generator(self, data_len=15, batch_size=1):
        # ---------- preprocess the pm25 data -----------
        self.pmScaler.make_norlized_dataset()
        pm25_x_train, pm25_x_val, pm25_x_test = self.pmScaler.slice_data(rate = 0.1)
        print(f"pm25_x_train length : {len(pm25_x_train)}" )
        print(f"pm25_x_val length : {len(pm25_x_val)}" )
        print(f"pm25_x_test length : {len(pm25_x_test)}" )

        self.pm25_test = pm25_x_test

        # ---------- pm25 Data Generator -------------
        self.pm25_train_data_gen = K.preprocessing.sequence.TimeseriesGenerator(pm25_x_train, pm25_x_train, batch_size=batch_size, length=1, shuffle=False)
        self.pm25_val_data_gen = K.preprocessing.sequence.TimeseriesGenerator(pm25_x_val, pm25_x_val, batch_size = batch_size, length=1, shuffle=False)
        self.pm25_test_data_gen = K.preprocessing.sequence.TimeseriesGenerator(pm25_x_test, pm25_x_test, batch_size = 1, length=1, shuffle=False)

        # ---------- CNN result ------------
        print("get CNN output..")

        print("-----------get train output ---------")
        self.cnn_train_output = self.cnn_model.predict(self.cnn_train_data_gen)
        print(f"cnn_train_output length : {len(self.cnn_train_output)}" )
        cnn_train_output_gen = K.preprocessing.sequence.TimeseriesGenerator(self.cnn_train_output, self.cnn_train_output, batch_size = batch_size, length=1, shuffle=False)
        rated_cnn_train_output = self._make_lstm_input(cnn_train_output_gen, self.pm25_train_data_gen)
        print(f"rated_cnn_train_output length : {len(rated_cnn_train_output)}" )

        print("-----------get validation output ---------")
        self.cnn_val_output = self.cnn_model.predict(self.cnn_val_data_gen)
        print(f"cnn_val_output length : {len(self.cnn_val_output)}" )
        cnn_val_output_gen = K.preprocessing.sequence.TimeseriesGenerator(self.cnn_val_output, self.cnn_val_output, batch_size = batch_size, length=1, shuffle=False)
        rated_cnn_val_output = self._make_lstm_input(cnn_val_output_gen, self.pm25_val_data_gen)
        print(f"rated_cnn_val_output length : {len(rated_cnn_val_output)}" )

        print("-----------get test output ---------")
        self.cnn_test_output = self.cnn_model.predict(self.cnn_test_data_gen)
        print(f"cnn_test_output length : {len(self.cnn_test_output)}" )
        cnn_test_output_gen = K.preprocessing.sequence.TimeseriesGenerator(self.cnn_test_output, self.cnn_test_output, batch_size = batch_size, length=1, shuffle=False)
        rated_cnn_test_output = self._make_lstm_input(cnn_test_output_gen, self.pm25_test_data_gen)

        # ---------- LSTM data gen ---------
        self.lstm_train_data_gen = LSTMInputGenerator(pm25_x_train, pm25_x_train, data_len, rated_cnn_train_output, batch_size = batch_size)
        self.lstm_val_data_gen = LSTMInputGenerator(pm25_x_val, pm25_x_val, data_len, rated_cnn_val_output, batch_size = batch_size)
        self.lstm_test_data_gen = LSTMInputGenerator(pm25_x_test, pm25_x_test, data_len, rated_cnn_test_output, batch_size = batch_size)

        # ---------- build the model
        print("make LSTM model...")
        self.lstm_model = self._get_lstm_model(input_shape=(data_len + 1, 1))

    def cnn_model_fit(self, epochs=1):
        self.cnn_model.fit(x=self.cnn_train_data_gen, epochs=epochs, validation_data=(self.cnn_val_data_gen), callbacks=self.cnn_callbacks)

    def lstm_model_fit(self, epochs=1):
self.lstm_model.fit(x=self.lstm_train_data_gen, epochs=epochs, validation_data=self.lstm_val_data_gen, callbacks=self.lstm_callbacks) def total_model_evaluate(self): return self.lstm_model.evaluate(self.lstm_test_data_gen) def total_model_predict(self, data = None): if data is None: data = self.lstm_test_data_gen return self.lstm_model.predict(x=data) def predict(self, pm25, proxy): cnn_result = self.cnn_model.predict(x=proxy) lstm_result = self.lstm_model.predict(x=pm25) def load_lstm_model(self, path): print(f"Load model from {path}...") self.lstm_model = K.models.load_model(path) print("Done") def _get_cnn_model(self, input_shape): cnnModel = K.Sequential() cnnModel.add(K.layers.Conv2DTranspose(32, (2,2), input_shape=input_shape, activation="relu")) cnnModel.add(K.layers.MaxPool2D(strides=2)) cnnModel.add(K.layers.Flatten()) cnnModel.add(K.layers.Dropout(0.1)) cnnModel.add(K.layers.Dense(100, activation="relu")) cnnModel.add(K.layers.ReLU()) cnnModel.add(K.layers.Dense(5, activation="softmax")) cnnModel.summary() cnnModel.compile(optimizer="adam", loss="MSE") return cnnModel def _get_lstm_model(self, input_shape): lstm_model = K.Sequential() lstm_model.add(K.layers.LSTM(216, input_shape=input_shape)) lstm_model.add(K.layers.Dropout(0.3)) lstm_model.add(K.layers.Dense(128, activation="relu")) lstm_model.add(K.layers.Dropout(0.3)) lstm_model.add(K.layers.Dense(1, activation="sigmoid")) lstm_model.summary() lstm_model.compile(optimizer="adam", loss="MSE") return lstm_model def _cnn_y_split(self, data, data_length, rate=0.1): # cnn에 입력할 Y 데이터를 나눔 data = data[data_length-1:] data = K.utils.to_categorical(data) arrlen = int(len(data) * rate + 1) train, val, test = data[:-1 * (arrlen * 2)], data[-1 * (arrlen * 2) : -1 * (arrlen)], data[-1 * (arrlen):] return train, val, test def _get_grad_pm25(self): # pm25 의 변화율을 구하고 범주화함 grad_data = self.pm25.pct_change() grad_data = grad_data.fillna(method="pad") bins = [-9.166667e-02, -1e-15,1e-15, 1.212121e-01] # bins = [-10, -1e-15, 1e-15, 10] grad_level = np.digitize(grad_data, bins=bins, right=False) return grad_level def _get_rate(self, data): # 변화율 별로 pm25의 예측량을 구해봄(임시) index = data.argmax(axis=1) rate = np.array([]) for i in range(len(index)): if index[i] == 3: rate = np.append(rate, 1) else: rate = np.append(rate, 1 + ((index[i]-3) * 0.25)) # 최대 50%의 변화율을 줘 봄 return rate def _compute_with_data(self, data, value): # 예측량을 구함 rate = self._get_rate(data) return rate * value.squeeze(axis=1) def _make_lstm_input(self, cnn_output_gen, lstm_data_gen): # 기존 lstm_input 에 위에서 구한 변화율을 곱한 뒤에 쌓음. 
result = np.zeros(shape=(1,)) for i in range(cnn_output_gen.__len__()): lx, ly = lstm_data_gen[i] cx, cy = cnn_output_gen[i] # print(f"cx shape : {cx.shape}, lx shape : {lx.shape}") if i == 0: result = self._compute_with_data(cx.squeeze(axis=1), lx.squeeze(axis=1)) else: result = np.hstack((result, self._compute_with_data(cx.squeeze(axis=1), lx.squeeze(axis=1)))) return result # + [markdown] id="Da1JnS-Q3rl-" colab_type="text" # # Model Init # + id="OUOca31BqV8i" colab_type="code" colab={} pm25, proxy = get_data() # + id="05e59Gkp4BwK" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="ac4308f7-d231-43b0-ffdc-c2db060f4474" model = EntireModel(pm25, proxy) # + id="u-eRVA1z4FPT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 527} outputId="0882be19-ae56-421d-b59b-8cff82a21483" model.make_proxy_data_generator(data_len = 2) # + id="WEkiK5cS6OYh" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 480} outputId="da993aa6-4097-45e1-a21d-a96f9c6b1747" model.cnn_model_fit(epochs=40) # + id="m9OVQONU9Im3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 595} outputId="dbffd617-db2b-4277-d10e-8957ef29bf1b" model.make_lstm_data_generator(data_len = 15, batch_size=128) # + id="m9eL_aeiX1HD" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="c8e5baeb-02bf-44b6-bea3-86563b1b9e74" model.load_lstm_model('/content/drive/My Drive/trained_model/AirPollution') # + id="ZwRdaKZiMc4e" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="e9bfaf39-a84e-4f29-84f9-359c44c894c6" model.lstm_model_fit(epochs=100) # + id="OEDmXA7wuMH2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="bc86617b-8c87-4bd8-ac2f-f76d09cd74be" model.total_model_evaluate() # + id="yT-duD7y_N6s" colab_type="code" colab={} predict = model.total_model_predict() # + id="TWmnZ3MuQDAL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 281} outputId="5dc3bb81-9d3e-4fd4-db0a-5e743a5f0cc5" plt.plot(model.pm25_test, label="Actual") plt.plot(predict, label="Predicted") plt.legend() plt.title("Compare Predict With Actual Data") plt.show() # + id="uu9O30Aftg30" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 156} outputId="c95ad73e-b6a3-4c5d-d504-70cf1af82f5c" # model.lstm_model.save('/content/drive/My Drive/trained_model/AirPollution2') # + id="eUY5C5nfqKjZ" colab_type="code" colab={} sample_x = pm25[0] # + id="0DHlGRYEqXjw" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="b0fed3b1-63d5-4dea-c5e4-175ee4940f34" sample_x # + id="eN7hZIQdqaPO" colab_type="code" colab={}
17,037
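The notebook record above follows the paper's CNN-LSTM combination idea: a CNN classifies the PM2.5 change-rate category from the proxy weather features, `_get_rate` and `_compute_with_data` turn that class into a scalar value, and `LSTMInputGenerator.__getitem__` concatenates that value onto each PM2.5 window as one extra timestep before the LSTM sees it, which is why the LSTM is built with `input_shape=(data_len + 1, 1)`. The sketch below illustrates only that window-stacking step in plain NumPy; the function name, shapes, and example hint values are illustrative assumptions, not the notebook's API.

import numpy as np

def append_cnn_hint(windows, hints):
    """windows: (batch, T, 1) scaled PM2.5 input windows for the LSTM.
    hints: (batch,) one scalar per window derived from the CNN class output
    (in the notebook: the predicted change-rate class turned into rate * window value).
    Returns (batch, T + 1, 1): each window with its hint appended as a final timestep."""
    hint_step = np.asarray(hints, dtype=windows.dtype).reshape(-1, 1, 1)
    return np.concatenate((windows, hint_step), axis=1)

# toy example: 2 windows of length 15, mirroring the data_len=15 setting above
windows = np.random.rand(2, 15, 1).astype("float32")
hints = np.array([0.31, 0.58], dtype="float32")   # made-up values
stacked = append_cnn_hint(windows, hints)
print(stacked.shape)   # (2, 16, 1)

In the record itself this happens per batch inside `LSTMInputGenerator.__getitem__`, which reshapes the rated CNN output to `(-1, 1, 1)` and concatenates it with the `TimeseriesGenerator` windows along the time axis.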
/vdecision tree.ipynb
ebc451a98a8c8d55e5e161e9730c4d9a0ecd910e
[]
no_license
vvignesh17/Machine_Learning
https://github.com/vvignesh17/Machine_Learning
1
0
null
2019-03-24T09:39:03
2019-03-24T09:35:00
Jupyter Notebook
Jupyter Notebook
false
false
.py
60,111
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.15.2
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
import pandas as pd
import numpy as np
from sklearn import tree

df = pd.read_csv("Documents/PastHires.csv", header=0)
df.head()
# -

# Assigning x and y: map the categorical columns to integers
df1 = {'Y': 1, 'N': 0}
df['Employed?'] = df['Employed?'].map(df1)
df['Top-tier school'] = df['Top-tier school'].map(df1)
df['Interned'] = df['Interned'].map(df1)
df['Hired'] = df['Hired'].map(df1)
df2 = {'BS': 0, 'MS': 1, 'PhD': 2}
df['Level of Education'] = df['Level of Education'].map(df2)
df

feature = list(df.columns[:6])
feature

# Using a decision tree on X, y:
# the tree is formed by choosing splits that increase information gain or decrease the Gini index
y = df["Hired"]
X = df[feature]
clf = tree.DecisionTreeClassifier()
clf = clf.fit(X, y)

# +
# Graphical representation of the decision tree
from IPython.display import Image
from sklearn.externals.six import StringIO  # in newer scikit-learn, use `from io import StringIO`
import pydotplus

dot_data = StringIO()
tree.export_graphviz(clf, out_file=dot_data, feature_names=feature)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png())

# +
from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier(n_estimators=100)
clf = clf.fit(X, y)

# Predict employment of an employed 10-year veteran
print(clf.predict([[0, 0, 0, 1, 1, 0]]))
# ...and an unemployed 10-year veteran
print(clf.predict([[0, 0, 0, 1, 1, 1]]))
1,592
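The decision-tree script above relies on scikit-learn's `DecisionTreeClassifier`, which, as its comment notes, grows the tree by picking splits that increase information gain or decrease the Gini index. The short sketch below shows how those two measures are computed for one candidate binary split; the label arrays are made up for illustration and are not the PastHires.csv data.

import numpy as np

def gini(labels):
    """Gini impurity: 1 - sum_i p_i**2 over the class proportions."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def entropy(labels):
    """Shannon entropy in bits: -sum_i p_i * log2(p_i)."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def information_gain(parent, left, right):
    """Entropy of the parent node minus the size-weighted entropy of the two children."""
    n = len(parent)
    return entropy(parent) - (len(left) / n) * entropy(left) - (len(right) / n) * entropy(right)

# hypothetical "Hired" labels split on a binary feature such as "Employed?"
parent = np.array([1, 1, 1, 0, 0, 1, 0, 1, 0, 1])
left = np.array([1, 1, 1, 1, 1, 0])   # rows where the feature == 1
right = np.array([0, 0, 0, 1])        # rows where the feature == 0
print(round(gini(parent), 3), round(information_gain(parent, left, right), 3))

scikit-learn chooses between these measures through the `criterion` argument of `DecisionTreeClassifier` ('gini' by default, 'entropy' for information gain).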
/c3-decision-trees.ipynb
1fe05e267ea005731f35c1ae9511fe433c5f8367
[]
no_license
MofiiTech/machine-learning-in-action
https://github.com/MofiiTech/machine-learning-in-action
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
2,472
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="view-in-github" # [View in Colaboratory](https://colab.research.google.com/github/restrepo/WOSplus/blob/master/test_sample.ipynb) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="8h8QFxvObxdb" outputId="0ab5f50f-e79c-41c9-f2d5-04c86a9a76a7" language="bash" # # Check if colaboratory was launched and install missing requirements # if [ "$(pwd)" == /content ];then # pip install openpyxl xlrd unidecode python-levenshtein requests_testadapter > /dev/null # git clone https://github.com/restrepo/WOSplus.git > /dev/null # #mv WOSplus/wosplus/* . # fi # + colab={} colab_type="code" id="TF3UTQqjaLxa" import os if os.getcwd()== '/content': os.chdir('WOSplus') # + colab={} colab_type="code" id="uwbkDKBbbwvQ" import wosplus # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="s2kdcEq8R2Aq" outputId="fac30ce1-7798-428e-d524-ce3f6cb4b0e4" # %%writefile drive.cfg [FILES] Sample_WOS.xlsx = 1--LJZ4mYyQcaJ93xBdbnYj-ZzdjO2Wq2 Sample_SCI.xlsx = 1-3a-hguQTk5ko8JRLCx--EKaslxGVscf Sample_SCP.xlsx = 1-IAWlMdp2U-9L2jvZUio04ub1Ym3PX-H # + colab={"base_uri": "https://localhost:8080/", "height": 136} colab_type="code" id="BUzIDf92bwvU" outputId="58d95832-5a8e-44b5-850a-274e57723961" cib=wosplus.wosplus('drive.cfg') cib.load_biblio('Sample_WOS.xlsx') cib.load_biblio('Sample_SCI.xlsx',prefix='SCI') cib.load_biblio('Sample_SCP.xlsx',prefix='SCP') print('before merge: {}'.format( cib.WOS.shape[0]+cib.SCI.shape[0]+cib.SCP.shape[0] ) ) cib.merge(left='WOS',right='SCI') if True: print('intial: {}'.format( cib.WOS.shape[0]+cib.SCI.shape[0]) ) print('final : {}'.format( cib.WOS_SCI.shape) ) cib.merge(left='WOS_SCI',right='SCP') if True: print('intial: {}'.format( cib.WOS_SCI.shape[0]+cib.SCP.shape[0]) ) print('final : {}'.format( cib.WOS_SCI_SCP.shape) ) # + [markdown] colab_type="text" id="tXQxDy_ts_th" # before merge: 48 # .intial: 38 # final : (28, 96) # ..intial: 38 # final : (30, 142) # + [markdown] colab_type="text" id="lyYwEqdojkH2" # ## Unitary tests # Copy to test.py: # ```bash # # # cd tests # python test.py # ``` # + colab={} colab_type="code" id="ugh0hx54ghMH" import unittest self = unittest.TestCase('__init__') self.assertTrue(True) self.assertTrue( cib.WOS.shape[0]+cib.SCI.shape[0]+cib.SCP.shape[0] == 48 ) self.assertTrue ( cib.WOS.shape[0]+cib.SCI.shape[0] == 38 ) self.assertTrue ( cib.WOS_SCI.shape[0] == 28 ) self.assertTrue( cib.WOS_SCI.shape[0]+cib.SCP.shape[0] == 38 ) self.assertTrue( cib.WOS_SCI_SCP.shape[0] == 30 ) self.assertTrue(list( cib.WOS_SCI_SCP.Tipo.values )==['WOS','WOS', 'WOS','WOS','WOS','WOS','WOS_SCI','SCI','WOS','WOS','WOS','WOS', 'WOS','WOS','WOS','WOS','WOS','WOS_SCI','WOS_SCI','WOS_SCI', 'WOS_SCP','WOS_SCI_SCP','WOS_SCI_SCP','WOS_SCI_SCP','WOS_SCP', 'WOS_SCI_SCP','WOS_SCI_SCP','WOS_SCI_SCP','SCP','SCP']) # + colab={"base_uri": "https://localhost:8080/", "height": 102} colab_type="code" id="7746G3jUa4dr" outputId="44e3cdc2-43de-4b41-f7cc-d3138521fb45" language="bash" # cd tests # python3 test.py # + [markdown] colab_type="text" id="WWNF7iZkPbr9" # ### In progress... 
Other database # + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" id="mwVr3x5cbwvY" outputId="63879690-c517-462b-9ef6-1f43ab877d18" cib.load_biblio('Sample_SCP.xlsx',prefix='NEW') # + colab={} colab_type="code" id="-PpQ4BfdPh_b" cib.merge(left='WOS_SCI_SCP',right='NEW',right_DOI='NEW_DOI', right_TI='NEW_Title', right_extra_journal='NEW_Source title', right_author='NEW_Authors', right_year='NEW_Year') # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="6WWU3mIXPoaL" outputId="fbba0d3c-9c37-4833-e62b-a253ea3ffe2a" cib.WOS_SCI_SCP_NEW.shape # + colab={} colab_type="code" id="4-C0_KEHPpv3" .Eout[t] <= model.Z[t] model.positive_charge = Constraint(model.T, rule=positive_charge) def max_SOC(model, t): return model.Z[t] <= model.Smax model.max_SOC = Constraint(model.T, rule=max_SOC) # Objective function and optimization income = sum(group.loc[t,'HOEP_lag24'] * model.Eout[t] for t in model.T) expenses = sum(group.loc[t,'HOEP_lag24'] * model.Ein[t] for t in model.T) profits = (income - expenses) model.objective = Objective(expr=profits, sense=maximize) # Solve model solver = SolverFactory('ipopt') solver.solve(model) # Extract model output in list Date = list(group['Datetime']) HOEP = list(group['HOEP']) HOEP_lag24 = list(group['HOEP_lag24']) output.append([Date,HOEP,HOEP_lag24,model.Ein.get_values().values(),model.Eout.get_values().values(), model.Z.get_values().values()]) list_SOC = list(model.Z.get_values().values()) # transform the SOC values for each t in a list last_period_end_SOC = list_SOC[-1] # Extract the SOC for the last t of the period and store it Profits.append(model.objective()) df_results = pd.DataFrame(output) df_results.rename(columns = {0: 'Date', 1: 'HOEP', 2:'HOEP_lag24', 3: 'Ein', 4:'Eout', 5:'Z', 6:'Load'}, inplace = True) # Present final results in dataframe d = hoep = ein = eout = z = HOEP_lag24 = [] # create empty list and store content for i in list(df_results.index): d = d + list(df_results.loc[i,'Date']) hoep = hoep + list(df_results.loc[i,'HOEP']) HOEP_lag24 = HOEP_lag24 + list(df_results.loc[i,'HOEP_lag24']) ein = ein + list(df_results.loc[i,'Ein']) eout = eout + list(df_results.loc[i,'Eout']) z = z + list(df_results.loc[i,'Z']) results = pd.DataFrame(zip(d, hoep, HOEP_lag24, ein, eout, z), columns = ['Date','HOEP', 'HOEP_lag24', 'Ein','Eout','SOC']) results['real_profits'] = results.Eout * results.HOEP - results.Ein * results.HOEP results['opt_profits'] = results.Eout*results.HOEP_lag24 - results.Ein*results.HOEP_lag24 print("Results for year starting:", dataframe.loc[1,'Date']) print("Running time:", datetime.now() - start_time) print("For duration =", duration,",","Profits with backcasting prices =", sum(Profits)/1000) print("For duration =", duration,",","real Profits with actual HOEP =", results['real_profits'].sum()/1000) bc_2018_24h = results['real_profits'].sum()/1000 results # + ########################################## ################# 2018 ################## ### OPTIMIZATION WITH 1 WEEK PLANNING ### ########################################## ## results shown in Table 5.2 in the thesis (HOEP retardé 168h) start_time = datetime.now() list_horizons = ["1d","7d", "14d", "12h", "6h", "2d"] horizon = list_horizons[1] df = df_2018.loc[:8759] df = df.reset_index() # Créer un DatetimeIndex pour pouvoir utiliser df.groupby et pd.Grouper ensuite df['Datetime'] = pd.to_datetime(df.Date) + pd.to_timedelta(df.Hour, unit='h') grouped = df.set_index('Datetime').groupby(pd.Grouper(freq=horizon)) duration = 
12 output = [] # create empty lists and store results in it during the loop Profits = [] for name, group in grouped: group.reset_index(inplace=True) model = ConcreteModel() # Variables of the model model.T = Set(initialize=group.index, ordered=True) model.Rmax = Param(initialize=1, within=Any) model.Smax = Param(initialize=duration, within=Any) model.Dmax = Param(initialize=duration, within=Any) model.eta = Param(initialize=0.86) model.Ein = Var(model.T, domain=NonNegativeReals) model.Eout = Var(model.T, domain=NonNegativeReals) model.Z = Var(model.T, domain=NonNegativeReals) model.value = Var(model.T) # Constraints def storage_state(model, t): if group['Datetime'].iloc[0] == df.Datetime[0]: if t == model.T.first(): return model.Z[t] == duration/2 else: return model.Z[t] == model.Z[t-1] + model.Ein[t]*model.eta - model.Eout[t] else: if t == model.T.first(): return model.Z[t] == last_period_end_SOC + model.Ein[t]*model.eta - model.Eout[t] else: return model.Z[t] == model.Z[t-1] + model.Ein[t]*model.eta - model.Eout[t] model.charge_state = Constraint(model.T, rule=storage_state) def discharge_constraint(model, t): return model.Eout[t] <= model.Rmax model.discharge = Constraint(model.T, rule=discharge_constraint) def charge_constraint(model, t): return model.Ein[t] <= model.Rmax model.charge = Constraint(model.T, rule=charge_constraint) def positive_charge(model, t): return model.Eout[t] <= model.Z[t] model.positive_charge = Constraint(model.T, rule=positive_charge) def max_SOC(model, t): return model.Z[t] <= model.Smax model.max_SOC = Constraint(model.T, rule=max_SOC) def value_constraint(model, t): return model.value[t] == (model.Eout[t] - model.Ein[t]) * group.loc[t,'HOEP_lag168'] model.value_constraint = Constraint(model.T, rule=value_constraint) # Objective function and optimization profits = sum(model.value[t] for t in model.T) model.objective = Objective(expr=profits, sense=maximize) # Solve model solver = SolverFactory('ipopt') solver.solve(model) # Extract model output in list Date = list(group['Datetime']) HOEP = list(group['HOEP']) HOEP_lag360 = list(group['HOEP_lag360']) output.append([Date, HOEP, HOEP_lag360, model.Ein.get_values().values(), model.Eout.get_values().values(), model.Z.get_values().values()]) list_SOC = list(model.Z.get_values().values()) # transform the SOC values for each t in a list last_period_end_SOC = list_SOC[-1] # Extract the SOC for the last t of the period and store it Profits.append(model.objective()) df_results = pd.DataFrame(output) df_results.rename(columns = {0: 'Date', 1: 'HOEP', 2:'HOEP_lag360', 3: 'Ein', 4:'Eout', 5:'Z', }, inplace = True) df_results # we see that doing just so, the dataframe shape is weird and not the way we want. 
# Present final results in dataframe d = hoep = ein = eout = z = HOEP_lag360 = [] # create empty list and store content of each for i in list(df_results.index): d = d + list(df_results.loc[i,'Date']) hoep = hoep + list(df_results.loc[i,'HOEP']) HOEP_lag360 = HOEP_lag360 + list(df_results.loc[i,'HOEP_lag360']) ein = ein + list(df_results.loc[i,'Ein']) eout = eout + list(df_results.loc[i,'Eout']) z = z + list(df_results.loc[i,'Z']) results = pd.DataFrame(zip(d, hoep, HOEP_lag360, ein, eout, z), columns = ['Date','HOEP','HOEP_lag360','Ein','Eout','SOC']) results['real_profits'] = results.Eout*results.HOEP - results.Ein*results.HOEP print("Results for year:", dataframe.loc[1,'Date']) print("Running time:", datetime.now() - start_time) print("For duration =", duration, "," ,"Perfect foresight","real Profits with actual HOEP =", results['real_profits'].sum()/1000) bc_2019_7d = results['real_profits'].sum()/1000 results # + ########################################## ################# 2018 ################## ### OPTIMIZATION WITH 2 WEEKs PLANNING ### ########################################## ## results shown in Table 5.2 in the thesis (HOEP retardé 360h) start_time = datetime.now() list_horizons = ["1d","7d", "14d", "12h", "6h", "2d"] horizon = list_horizons[2] dataframe = df_2018.copy() # choose the year of data we want to optimize for df = dataframe.loc[:8759] df = df.reset_index() # Créer un DatetimeIndex pour pouvoir utiliser df.groupby et pd.Grouper ensuite df['Datetime'] = pd.to_datetime(df.Date) + pd.to_timedelta(df.Hour, unit='h') grouped = df.set_index('Datetime').groupby(pd.Grouper(freq=horizon)) duration = 12 output = [] # create empty lists and store results in it during the loop Profits = [] for name, group in grouped: group.reset_index(inplace=True) model = ConcreteModel() # Variables of the model model.T = Set(initialize=group.index, ordered=True) model.Rmax = Param(initialize=1, within=Any) model.Smax = Param(initialize=duration, within=Any) model.Dmax = Param(initialize=duration, within=Any) model.eta = Param(initialize=0.86) model.Ein = Var(model.T, domain=NonNegativeReals) model.Eout = Var(model.T, domain=NonNegativeReals) model.Z = Var(model.T, domain=NonNegativeReals) model.value = Var(model.T) # Constraints def storage_state(model, t): if group['Datetime'].iloc[0] == df.Datetime[0]: if t == model.T.first(): return model.Z[t] == duration/2 else: return model.Z[t] == model.Z[t-1] + model.Ein[t]*model.eta - model.Eout[t] else: if t == model.T.first(): return model.Z[t] == last_period_end_SOC + model.Ein[t]*model.eta - model.Eout[t] else: return model.Z[t] == model.Z[t-1] + model.Ein[t]*model.eta - model.Eout[t] model.charge_state = Constraint(model.T, rule=storage_state) def discharge_constraint(model, t): return model.Eout[t] <= model.Rmax model.discharge = Constraint(model.T, rule=discharge_constraint) def charge_constraint(model, t): return model.Ein[t] <= model.Rmax model.charge = Constraint(model.T, rule=charge_constraint) def positive_charge(model, t): return model.Eout[t] <= model.Z[t] model.positive_charge = Constraint(model.T, rule=positive_charge) def max_SOC(model, t): return model.Z[t] <= model.Smax model.max_SOC = Constraint(model.T, rule=max_SOC) def value_constraint(model, t): return model.value[t] == (model.Eout[t] - model.Ein[t]) * group.loc[t,'HOEP_lag360'] model.value_constraint = Constraint(model.T, rule=value_constraint) # Objective function and optimization profits = sum(model.value[t] for t in model.T) model.objective = Objective(expr=profits, 
sense=maximize) # Solve model solver = SolverFactory('ipopt') solver.solve(model) # Extract model output in list Date = list(group['Datetime']) HOEP = list(group['HOEP']) HOEP_lag360 = list(group['HOEP_lag360']) output.append([Date, HOEP, HOEP_lag360, model.Ein.get_values().values(), model.Eout.get_values().values(), model.Z.get_values().values()]) list_SOC = list(model.Z.get_values().values()) # transform the SOC values for each t in a list last_period_end_SOC = list_SOC[-1] # Extract the SOC for the last t of the period and store it Profits.append(model.objective()) df_results = pd.DataFrame(output) df_results.rename(columns = {0: 'Date', 1: 'HOEP', 2:'HOEP_lag360', 3: 'Ein', 4:'Eout', 5:'Z', }, inplace = True) df_results # we see that doing just so, the dataframe shape is weird and not the way we want. # Present final results in dataframe d = hoep = ein = eout = z = HOEP_lag360 = [] # create empty list and store content of each for i in list(df_results.index): d = d + list(df_results.loc[i,'Date']) hoep = hoep + list(df_results.loc[i,'HOEP']) HOEP_lag360 = HOEP_lag360 + list(df_results.loc[i,'HOEP_lag360']) ein = ein + list(df_results.loc[i,'Ein']) eout = eout + list(df_results.loc[i,'Eout']) z = z + list(df_results.loc[i,'Z']) results = pd.DataFrame(zip(d, hoep, HOEP_lag360, ein, eout, z), columns = ['Date','HOEP','HOEP_lag360','Ein','Eout','SOC']) results['real_profits'] = results.Eout*results.HOEP - results.Ein*results.HOEP print("Results for year:", dataframe.loc[1,'Date']) print("Running time:", datetime.now() - start_time) print("For duration =", duration, "," ,"Perfect foresight","real Profits with actual HOEP =", results['real_profits'].sum()/1000) bc_2019_14d = results['real_profits'].sum()/1000 results # + ##################################################### ################# 2018 ############################ ### OPTIMIZATION WITH PF OF THE PRICE WITH 1 year PLANNING ### ########################################################## ## results shown in Table 5.2 in the thesis (HOEP réel) start_time = datetime.now() df = df_2018.copy() # choose the year of data we want to optimize for df = df.reset_index(drop=True) df = df.loc[:8759] start_time = datetime.now() duration = 12 model = ConcreteModel() # Variables of the model model.T = Set(initialize=df.index, ordered=True) model.Rmax = Param(initialize=1, within=Any) model.Smax = Param(initialize=duration, within=Any) model.Dmax = Param(initialize=duration, within=Any) model.eta = Param(initialize=0.86) model.Ein = Var(model.T, domain=NonNegativeReals) model.Eout = Var(model.T, domain=NonNegativeReals) model.Z = Var(model.T, domain=NonNegativeReals) model.value = Var(model.T) # Constraints def storage_state(model, t): if df['Datetime'].iloc[0] == df.Datetime[0]: if t == model.T.first(): return model.Z[t] == duration/2 else: return model.Z[t] == model.Z[t-1] + model.Ein[t]*model.eta - model.Eout[t] else: if t == model.T.first(): return model.Z[t] == last_period_end_SOC + model.Ein[t]*model.eta - model.Eout[t] else: return model.Z[t] == model.Z[t-1] + model.Ein[t]*model.eta - model.Eout[t] model.charge_state = Constraint(model.T, rule=storage_state) def discharge_constraint(model, t): return model.Eout[t] <= model.Rmax model.discharge = Constraint(model.T, rule=discharge_constraint) def charge_constraint(model, t): return model.Ein[t] <= model.Rmax model.charge = Constraint(model.T, rule=charge_constraint) def positive_charge(model, t): return model.Eout[t] <= model.Z[t] model.positive_charge = Constraint(model.T, 
rule=positive_charge) def max_SOC(model, t): return model.Z[t] <= model.Smax model.max_SOC = Constraint(model.T, rule=max_SOC) def value_constraint(model, t): return model.value[t] == (model.Eout[t] - model.Ein[t]) * df.loc[t,'HOEP'] model.value_constraint = Constraint(model.T, rule=value_constraint) # Objective function and optimization profits = sum(model.value[t] for t in model.T) model.objective = Objective(expr=profits, sense=maximize) # Solve model #solver = SolverFactory('glpk') solver = SolverFactory('ipopt') solver.solve(model) SOC = list(model.Z.get_values().values()) last_period_end_SOC = SOC[-1] Profits = model.objective() print("EAV with HOEP and 8760h planning:", round(Profits/1000,2)) print("running time:", datetime.now() - start_time) # + ######################################################################################## #################################### 2018 ############################################# ### OPTIMIZATION WITH 12h XGBoost rolling horizon using Model 1 (M1) price forecast ### ####################################################################################### # M1 2018, horizon = 12h start_time = datetime.now() list_Date = [] list_HOEP = [] list_XGB = [] list_XGB12 = [] list_SOC = [] list_Ein = [] list_Eout = [] list_value = [] horizon = 11 duration = 12 for i in range(0,8760): df = data.loc[0+i:horizon+i] df.reset_index(inplace=True) model = ConcreteModel() # Variables of the model model.T = Set(initialize=df.index, ordered=True) model.Rmax = Param(initialize=1, within=Any) model.Smax = Param(initialize=duration, within=Any) model.Dmax = Param(initialize=duration, within=Any) model.Ein = Var(model.T, domain=NonNegativeReals) model.Eout = Var(model.T, domain=NonNegativeReals) model.Z = Var(model.T, domain=NonNegativeReals) model.L = Var(model.T, domain=NonNegativeReals) model.value = Var(model.T) model.eta = Param(initialize=0.86) # Constraints def storage_state(model, t): if df['Datetime'].iloc[0] == data.Datetime[0]: if t == model.T.first(): return model.Z[t] == duration/2 else: return model.Z[t] == model.Z[t-1] + model.Ein[t]*model.eta - model.Eout[t] else: if t == model.T.first(): return model.Z[t] == SOC + model.Ein[t]*model.eta - model.Eout[t] else: return model.Z[t] == model.Z[t-1] + model.Ein[t]*model.eta - model.Eout[t] model.charge_state = Constraint(model.T, rule=storage_state) def discharge_constraint(model, t): return model.Eout[t] <= model.Rmax model.discharge = Constraint(model.T, rule=discharge_constraint) def charge_constraint(model, t): return model.Ein[t] <= model.Rmax model.charge = Constraint(model.T, rule=charge_constraint) def positive_charge(model, t): return model.Eout[t] <= model.Z[t] model.positive_charge = Constraint(model.T, rule=positive_charge) def max_SOC(model, t): return model.Z[t] <= model.Smax model.max_SOC = Constraint(model.T, rule=max_SOC) def value_constraint(model, t): if t == model.T.first(): return model.value[t] == (model.Eout[t] - model.Ein[t]) * df.loc[t,'XGB'] else: return model.value[t] == (model.Eout[t] - model.Ein[t]) * df.loc[t,'XGB12'] model.value_constraint = Constraint(model.T, rule=value_constraint) # Objective function and optimization profits = sum(model.value[t] for t in model.T) model.objective = Objective(expr=profits, sense=maximize) # Solve model solver = SolverFactory('glpk') solver.solve(model) # Extract model output in list Date = list(df['Datetime'])[0] HOEP = list(df['HOEP'])[0] XGB = list(df['XGB'])[0] XGB12 = list(df['XGB12'])[0] SOC = list(model.Z.get_values().values())[0] Ein= 
list(model.Ein.get_values().values())[0] Eout = list(model.Eout.get_values().values())[0] value = list(model.value.get_values().values())[0] list_Date.append(Date) list_HOEP.append(HOEP) list_XGB.append(XGB) list_XGB12.append(XGB12) list_SOC.append(SOC) list_Ein.append(Ein) list_Eout.append(Eout) list_value.append(value) # Present final results in a dataframe results = pd.DataFrame(list_Date, columns=['Datetime']) results['HOEP'] = list_HOEP results['XGB'] = list_XGB results['XGB12'] = list_XGB12 results['Ein'] = list_Ein results['Eout'] = list_Eout results['SOC'] = list_SOC results['value'] = list_value results['real_value'] = (results.Eout - results.Ein)*results.HOEP results = results.round(2) remaining_value = (results.loc[results['Eout'] > 0, 'HOEP'].mean() * results.SOC.tail(1)) print("running time:", datetime.now() - start_time) print("real value of storage:", (results.real_value.sum() + remaining_value.item())/1000) XGB_2018 = (results.real_value.sum() + remaining_value.item())/1000 results # + ######################################################################################## #################################### 2018 ############################################# ### OPTIMIZATION WITH 12h XGBoost rolling horizon using Model 2 (M2) price forecast ### ####################################################################################### # M2 2018, horizon = 24h start_time = datetime.now() list_Date = [] list_HOEP = [] list_XGB = [] list_XGB24 = [] list_SOC = [] list_Ein = [] list_Eout = [] list_value = [] horizon = 23 duration = 12 for i in range(0,8760): df = data.loc[0+i:horizon+i] df.reset_index(inplace=True) model = ConcreteModel() # Variables of the model model.T = Set(initialize=df.index, ordered=True) model.Rmax = Param(initialize=1, within=Any) model.Smax = Param(initialize=duration, within=Any) model.Dmax = Param(initialize=duration, within=Any) model.Ein = Var(model.T, domain=NonNegativeReals) model.Eout = Var(model.T, domain=NonNegativeReals) model.Z = Var(model.T, domain=NonNegativeReals) model.L = Var(model.T, domain=NonNegativeReals) model.value = Var(model.T) model.eta = Param(initialize=0.86) # Constraints def storage_state(model, t): if df['Datetime'].iloc[0] == data.Datetime[0]: if t == model.T.first(): return model.Z[t] == duration/2 else: return model.Z[t] == model.Z[t-1] + model.Ein[t]*model.eta - model.Eout[t] else: if t == model.T.first(): return model.Z[t] == SOC + model.Ein[t]*model.eta - model.Eout[t] else: return model.Z[t] == model.Z[t-1] + model.Ein[t]*model.eta - model.Eout[t] model.charge_state = Constraint(model.T, rule=storage_state) def discharge_constraint(model, t): return model.Eout[t] <= model.Rmax model.discharge = Constraint(model.T, rule=discharge_constraint) def charge_constraint(model, t): return model.Ein[t] <= model.Rmax model.charge = Constraint(model.T, rule=charge_constraint) def positive_charge(model, t): return model.Eout[t] <= model.Z[t] model.positive_charge = Constraint(model.T, rule=positive_charge) def max_SOC(model, t): return model.Z[t] <= model.Smax model.max_SOC = Constraint(model.T, rule=max_SOC) def value_constraint(model, t): if t == model.T.first(): return model.value[t] == (model.Eout[t] - model.Ein[t]) * df.loc[t,'XGB'] else: return model.value[t] == (model.Eout[t] - model.Ein[t]) * df.loc[t,'XGB24'] model.value_constraint = Constraint(model.T, rule=value_constraint) # Objective function and optimization profits = sum(model.value[t] for t in model.T) model.objective = Objective(expr=profits, sense=maximize) # Solve 
model solver = SolverFactory('glpk') solver.solve(model) # Extract model output in list Date = list(df['Datetime'])[0] HOEP = list(df['HOEP'])[0] XGB = list(df['XGB'])[0] XGB24 = list(df['XGB24'])[0] SOC = list(model.Z.get_values().values())[0] Ein= list(model.Ein.get_values().values())[0] Eout = list(model.Eout.get_values().values())[0] value = list(model.value.get_values().values())[0] list_Date.append(Date) list_HOEP.append(HOEP) list_XGB.append(XGB) list_XGB24.append(XGB24) list_SOC.append(SOC) list_Ein.append(Ein) list_Eout.append(Eout) list_value.append(value) # Present final results in a dataframe results = pd.DataFrame(list_Date, columns=['Datetime']) results['HOEP'] = list_HOEP results['XGB'] = list_XGB results['XGB24'] = list_XGB24 results['Ein'] = list_Ein results['Eout'] = list_Eout results['SOC'] = list_SOC results['value'] = list_value results['real_value'] = (results.Eout - results.Ein)*results.HOEP results = results.round(2) remaining_value = (results.loc[results['Eout'] > 0, 'HOEP'].mean() * results.SOC.tail(1)) print("running time:", datetime.now() - start_time) print("real value of storage:", (results.real_value.sum() + remaining_value.item())/1000) XGB_2018 = (results.real_value.sum() + remaining_value.item())/1000 results # + ######################################################################################## #################################### 2018 ############################################# ### OPTIMIZATION WITH 12h XGBoost rolling horizon using Model 4 (M4) price forecast ### ####################################################################################### # M4 2018, horizon = 168h start_time = datetime.now() list_Date = [] list_HOEP = [] list_XGB = [] list_XGB12 = [] list_HOEP_lag168 = [] list_SOC = [] list_Ein = [] list_Eout = [] list_value = [] horizon = 167 duration = 12 for i in range(0,8760): df = data.loc[0+i:horizon+i] df.reset_index(inplace=True) model = ConcreteModel() # Variables of the model model.T = Set(initialize=df.index, ordered=True) model.Rmax = Param(initialize=1, within=Any) model.Smax = Param(initialize=duration, within=Any) model.Dmax = Param(initialize=duration, within=Any) model.Ein = Var(model.T, domain=NonNegativeReals) model.Eout = Var(model.T, domain=NonNegativeReals) model.Z = Var(model.T, domain=NonNegativeReals) model.L = Var(model.T, domain=NonNegativeReals) model.value = Var(model.T) model.eta = Param(initialize=0.86) # Constraints def storage_state(model, t): if df['Datetime'].iloc[0] == data.Datetime[0]: if t == model.T.first(): return model.Z[t] == duration/2 else: return model.Z[t] == model.Z[t-1] + model.Ein[t]*model.eta - model.Eout[t] else: if t == model.T.first(): return model.Z[t] == SOC + model.Ein[t]*model.eta - model.Eout[t] else: return model.Z[t] == model.Z[t-1] + model.Ein[t]*model.eta - model.Eout[t] model.charge_state = Constraint(model.T, rule=storage_state) def discharge_constraint(model, t): return model.Eout[t] <= model.Rmax model.discharge = Constraint(model.T, rule=discharge_constraint) def charge_constraint(model, t): return model.Ein[t] <= model.Rmax model.charge = Constraint(model.T, rule=charge_constraint) def positive_charge(model, t): return model.Eout[t] <= model.Z[t] model.positive_charge = Constraint(model.T, rule=positive_charge) def max_SOC(model, t): return model.Z[t] <= model.Smax model.max_SOC = Constraint(model.T, rule=max_SOC) def value_constraint(model, t): if t == model.T.first(): return model.value[t] == (model.Eout[t] - model.Ein[t]) * df.loc[t,'XGB'] else: if t in range(1,12): 
return model.value[t] == (model.Eout[t] - model.Ein[t]) * df.loc[t,'XGB12'] else: return model.value[t] == (model.Eout[t] - model.Ein[t]) * df.loc[t,'HOEP_lag168'] model.value_constraint = Constraint(model.T, rule=value_constraint) # Objective function and optimization profits = sum(model.value[t] for t in model.T) model.objective = Objective(expr=profits, sense=maximize) # Solve model solver = SolverFactory('ipopt') solver.solve(model) # Extract model output in list Date = list(df['Datetime'])[0] HOEP = list(df['HOEP'])[0] XGB = list(df['XGB'])[0] XGB12 = list(df['XGB12'])[0] HOEP_lag168 = list(df['HOEP_lag168'])[0] SOC = list(model.Z.get_values().values())[0] Ein= list(model.Ein.get_values().values())[0] Eout = list(model.Eout.get_values().values())[0] value = list(model.value.get_values().values())[0] list_Date.append(Date) list_HOEP.append(HOEP) list_XGB.append(XGB) list_XGB12.append(XGB12) list_HOEP_lag168.append(HOEP_lag168) list_SOC.append(SOC) list_Ein.append(Ein) list_Eout.append(Eout) list_value.append(value) # Present final results in a dataframe results = pd.DataFrame(list_Date, columns=['Datetime']) results['HOEP'] = list_HOEP results['XGB'] = list_XGB results['XGB12'] = list_XGB12 results['HOEP_lag168'] = list_HOEP_lag168 results['Ein'] = list_Ein results['Eout'] = list_Eout results['SOC'] = list_SOC results['value'] = list_value results['real_value'] = (results.Eout - results.Ein)*results.HOEP results = results.round(2) remaining_value = (results.loc[results['Eout'] > 0, 'HOEP'].mean() * results.SOC.tail(1)) print("running time:", datetime.now() - start_time) print("real value of storage:", (results.real_value.sum() + remaining_value.item())/1000) XGB_2018 = (results.real_value.sum() + remaining_value.item())/1000 results # + ######################################################################################## #################################### 2018 ############################################# ### OPTIMIZATION WITH 12h XGBoost rolling horizon using Model 7 (M7) price forecast ### ####################################################################################### # M7 2018, horizon = 360h start_time = datetime.now() list_Date = [] list_HOEP = [] list_XGB = [] list_XGB12 = [] list_HOEP_lag168 = [] list_HOEP_lag360 = [] list_SOC = [] list_Ein = [] list_Eout = [] list_value = [] horizon = 359 duration = 12 for i in range(0,8760): df = data.loc[0+i:horizon+i] df.reset_index(inplace=True) model = ConcreteModel() # Variables of the model model.T = Set(initialize=df.index, ordered=True) model.Rmax = Param(initialize=1, within=Any) model.Smax = Param(initialize=duration, within=Any) model.Dmax = Param(initialize=duration, within=Any) model.eta = Param(initialize=0.86) model.Ein = Var(model.T, domain=NonNegativeReals) model.Eout = Var(model.T, domain=NonNegativeReals) model.Z = Var(model.T, domain=NonNegativeReals) model.L = Var(model.T, domain=NonNegativeReals) model.value = Var(model.T) # Constraints def storage_state(model, t): if df['Datetime'].iloc[0] == data.Datetime[0]: if t == model.T.first(): return model.Z[t] == duration/2 else: return model.Z[t] == model.Z[t-1] + model.Ein[t]*model.eta - model.Eout[t] else: if t == model.T.first(): return model.Z[t] == SOC + model.Ein[t]*model.eta - model.Eout[t] else: return model.Z[t] == model.Z[t-1] + model.Ein[t]*model.eta - model.Eout[t] model.charge_state = Constraint(model.T, rule=storage_state) def discharge_constraint(model, t): return model.Eout[t] <= model.Rmax model.discharge = Constraint(model.T, 
rule=discharge_constraint) def charge_constraint(model, t): return model.Ein[t] <= model.Rmax model.charge = Constraint(model.T, rule=charge_constraint) def positive_charge(model, t): return model.Eout[t] <= model.Z[t] model.positive_charge = Constraint(model.T, rule=positive_charge) def max_SOC(model, t): return model.Z[t] <= model.Smax model.max_SOC = Constraint(model.T, rule=max_SOC) def value_constraint(model, t): if t == model.T.first(): return model.value[t] == (model.Eout[t] - model.Ein[t]) * df.loc[t,'XGB'] else: if t in range(1,12): return model.value[t] == (model.Eout[t] - model.Ein[t]) * df.loc[t,'XGB12'] else: if t in range(13,168): return model.value[t] == (model.Eout[t] - model.Ein[t]) * df.loc[t,'HOEP_lag168'] else: return model.value[t] == (model.Eout[t] - model.Ein[t]) * df.loc[t,'HOEP_lag360'] model.value_constraint = Constraint(model.T, rule=value_constraint) # Objective function and optimization profits = sum(model.value[t] for t in model.T) model.objective = Objective(expr=profits, sense=maximize) # Solve model solver = SolverFactory('ipopt') solver.solve(model) # Extract model output in list Date = list(df['Datetime'])[0] HOEP = list(df['HOEP'])[0] XGB = list(df['XGB'])[0] XGB12 = list(df['XGB12'])[0] HOEP_lag168 = list(df['HOEP_lag168'])[0] HOEP_lag360 = list(df['HOEP_lag360'])[0] SOC = list(model.Z.get_values().values())[0] Ein= list(model.Ein.get_values().values())[0] Eout = list(model.Eout.get_values().values())[0] value = list(model.value.get_values().values())[0] list_Date.append(Date) list_HOEP.append(HOEP) list_XGB.append(XGB) list_XGB12.append(XGB12) list_HOEP_lag168.append(HOEP_lag168) list_HOEP_lag360.append(HOEP_lag360) list_SOC.append(SOC) list_Ein.append(Ein) list_Eout.append(Eout) list_value.append(value) # Present final results in a dataframe results = pd.DataFrame(list_Date, columns=['Datetime']) results['HOEP'] = list_HOEP results['XGB'] = list_XGB results['XGB12'] = list_XGB12 results['HOEP_lag168'] = list_HOEP_lag168 results['HOEP_lag360'] = list_HOEP_lag360 results['Ein'] = list_Ein results['Eout'] = list_Eout results['SOC'] = list_SOC results['value'] = list_value results['real_value'] = (results.Eout - results.Ein)*results.HOEP results = results.round(2) remaining_value = (results.loc[results['Eout'] > 0, 'HOEP'].mean() * results.SOC.tail(1)) print("running time:", datetime.now() - start_time) print("real value of storage:", (results.real_value.sum() + remaining_value.item())/1000) XGB_2018 = (results.real_value.sum() + remaining_value.item())/1000 results # + ######################################################################################## #################################### 2018 ############################################# ### OPTIMIZATION WITH 12h XGBoost rolling horizon using Model 9 (M9) price forecast ### ####################################################################################### # M9 2018, horizon = 168h start_time = datetime.now() list_Date = [] list_HOEP = [] list_XGB = [] list_XGB12 = [] list_XGB24 = [] list_HOEP_lag168 = [] list_SOC = [] list_Ein = [] list_Eout = [] list_value = [] horizon = 167 duration = 12 for i in range(0,8760): df = data.loc[0+i:horizon+i] df.reset_index(inplace=True) model = ConcreteModel() # Variables of the model model.T = Set(initialize=df.index, ordered=True) model.Rmax = Param(initialize=1, within=Any) model.Smax = Param(initialize=duration, within=Any) model.Dmax = Param(initialize=duration, within=Any) model.eta = Param(initialize=0.86) model.Ein = Var(model.T, 
domain=NonNegativeReals) model.Eout = Var(model.T, domain=NonNegativeReals) model.Z = Var(model.T, domain=NonNegativeReals) model.L = Var(model.T, domain=NonNegativeReals) model.value = Var(model.T) # Constraints def storage_state(model, t): if df['Datetime'].iloc[0] == data.Datetime[0]: if t == model.T.first(): return model.Z[t] == duration/2 else: return model.Z[t] == model.Z[t-1] + model.Ein[t]*model.eta - model.Eout[t] else: if t == model.T.first(): return model.Z[t] == SOC + model.Ein[t]*model.eta - model.Eout[t] else: return model.Z[t] == model.Z[t-1] + model.Ein[t]*model.eta - model.Eout[t] model.charge_state = Constraint(model.T, rule=storage_state) def discharge_constraint(model, t): return model.Eout[t] <= model.Rmax model.discharge = Constraint(model.T, rule=discharge_constraint) def charge_constraint(model, t): return model.Ein[t] <= model.Rmax model.charge = Constraint(model.T, rule=charge_constraint) def positive_charge(model, t): return model.Eout[t] <= model.Z[t] model.positive_charge = Constraint(model.T, rule=positive_charge) def max_SOC(model, t): return model.Z[t] <= model.Smax model.max_SOC = Constraint(model.T, rule=max_SOC) def value_constraint(model, t): if t == model.T.first(): return model.value[t] == (model.Eout[t] - model.Ein[t]) * df.loc[t,'XGB'] else: if t in range(1,12): return model.value[t] == (model.Eout[t] - model.Ein[t]) * df.loc[t,'XGB12'] else: if t in range(12,24): return model.value[t] == (model.Eout[t] - model.Ein[t]) * df.loc[t,'XGB24'] else: return model.value[t] == (model.Eout[t] - model.Ein[t]) * df.loc[t,'HOEP_lag168'] model.value_constraint = Constraint(model.T, rule=value_constraint) # Objective function and optimization profits = sum(model.value[t] for t in model.T) model.objective = Objective(expr=profits, sense=maximize) # Solve model solver = SolverFactory('ipopt') solver.solve(model) # Extract model output in list Date = list(df['Datetime'])[0] HOEP = list(df['HOEP'])[0] XGB = list(df['XGB'])[0] XGB12 = list(df['XGB12'])[0] XGB24 = list(df['XGB24'])[0] HOEP_lag168 = list(df['HOEP_lag168'])[0] SOC = list(model.Z.get_values().values())[0] Ein= list(model.Ein.get_values().values())[0] Eout = list(model.Eout.get_values().values())[0] value = list(model.value.get_values().values())[0] list_Date.append(Date) list_HOEP.append(HOEP) list_XGB.append(XGB) list_XGB12.append(XGB12) list_XGB24.append(XGB24) list_HOEP_lag168.append(HOEP_lag168) list_SOC.append(SOC) list_Ein.append(Ein) list_Eout.append(Eout) list_value.append(value) # Present final results in a dataframe results = pd.DataFrame(list_Date, columns=['Datetime']) results['HOEP'] = list_HOEP results['XGB'] = list_XGB results['XGB12'] = list_XGB12 results['XGB24'] = list_XGB24 results['HOEP_lag168'] = list_HOEP_lag168 results['Ein'] = list_Ein results['Eout'] = list_Eout results['SOC'] = list_SOC results['value'] = list_value results['real_value'] = (results.Eout - results.Ein)*results.HOEP results = results.round(2) remaining_value = (results.loc[results['Eout'] > 0, 'HOEP'].mean() * results.SOC.tail(1)) print("running time:", datetime.now() - start_time) print("real value of storage:", (results.real_value.sum() + remaining_value.item())/1000) XGB_2018 = (results.real_value.sum() + remaining_value.item())/1000 results # + ######################################################################################## #################################### 2018 ############################################# ### OPTIMIZATION WITH 12h XGBoost rolling horizon using Model 6 (M6) price forecast ### 
####################################################################################### # M6 2018, horizon = 168h start_time = datetime.now() list_Date = [] list_HOEP = [] list_XGB = [] list_XGB6 = [] list_XGB12 = [] list_XGB24 = [] list_HOEP_lag168 = [] list_SOC = [] list_Ein = [] list_Eout = [] list_value = [] horizon = 167 duration = 12 for i in range(0,8760): df = data.loc[0+i:horizon+i] df.reset_index(inplace=True) model = ConcreteModel() # Variables of the model model.T = Set(initialize=df.index, ordered=True) model.Rmax = Param(initialize=1, within=Any) model.Smax = Param(initialize=duration, within=Any) model.Dmax = Param(initialize=duration, within=Any) model.eta = Param(initialize=0.86) model.Ein = Var(model.T, domain=NonNegativeReals) model.Eout = Var(model.T, domain=NonNegativeReals) model.Z = Var(model.T, domain=NonNegativeReals) model.L = Var(model.T, domain=NonNegativeReals) model.value = Var(model.T) # Constraints def storage_state(model, t): if df['Datetime'].iloc[0] == data.Datetime[0]: if t == model.T.first(): return model.Z[t] == duration/2 else: return model.Z[t] == model.Z[t-1] + model.Ein[t]*model.eta - model.Eout[t] else: if t == model.T.first(): return model.Z[t] == SOC + model.Ein[t]*model.eta - model.Eout[t] else: return model.Z[t] == model.Z[t-1] + model.Ein[t]*model.eta - model.Eout[t] model.charge_state = Constraint(model.T, rule=storage_state) def discharge_constraint(model, t): return model.Eout[t] <= model.Rmax model.discharge = Constraint(model.T, rule=discharge_constraint) def charge_constraint(model, t): return model.Ein[t] <= model.Rmax model.charge = Constraint(model.T, rule=charge_constraint) def positive_charge(model, t): return model.Eout[t] <= model.Z[t] model.positive_charge = Constraint(model.T, rule=positive_charge) def max_SOC(model, t): return model.Z[t] <= model.Smax model.max_SOC = Constraint(model.T, rule=max_SOC) def value_constraint(model, t): if t == model.T.first(): return model.value[t] == (model.Eout[t] - model.Ein[t]) * df.loc[t,'XGB'] else: if t in range(1,6): return model.value[t] == (model.Eout[t] - model.Ein[t]) * df.loc[t,'XGB6'] else: if t in range(6,12): return model.value[t] == (model.Eout[t] - model.Ein[t]) * df.loc[t,'XGB12'] else: if t in range(12,24): return model.value[t] == (model.Eout[t] - model.Ein[t]) * df.loc[t,'XGB24'] else: return model.value[t] == (model.Eout[t] - model.Ein[t]) * df.loc[t,'HOEP_lag168'] model.value_constraint = Constraint(model.T, rule=value_constraint) # Objective function and optimization profits = sum(model.value[t] for t in model.T) model.objective = Objective(expr=profits, sense=maximize) # Solve model solver = SolverFactory('ipopt') solver.solve(model) # Extract model output in list Date = list(df['Datetime'])[0] HOEP = list(df['HOEP'])[0] XGB = list(df['XGB'])[0] XGB6 = list(df['XGB6'])[0] XGB12 = list(df['XGB12'])[0] XGB24 = list(df['XGB24'])[0] HOEP_lag168 = list(df['HOEP_lag168'])[0] SOC = list(model.Z.get_values().values())[0] Ein= list(model.Ein.get_values().values())[0] Eout = list(model.Eout.get_values().values())[0] value = list(model.value.get_values().values())[0] list_Date.append(Date) list_HOEP.append(HOEP) list_XGB.append(XGB) list_XGB6.append(XGB6) list_XGB12.append(XGB12) list_XGB24.append(XGB24) list_HOEP_lag168.append(HOEP_lag168) list_SOC.append(SOC) list_Ein.append(Ein) list_Eout.append(Eout) list_value.append(value) # Present final results in a dataframe results = pd.DataFrame(list_Date, columns=['Datetime']) results['HOEP'] = list_HOEP results['XGB'] = list_XGB 
results['XGB6'] = list_XGB6 results['XGB12'] = list_XGB12 results['XGB24'] = list_XGB24 results['HOEP_lag168'] = list_HOEP_lag168 results['Ein'] = list_Ein results['Eout'] = list_Eout results['SOC'] = list_SOC results['value'] = list_value results['real_value'] = (results.Eout - results.Ein)*results.HOEP results = results.round(2) remaining_value = (results.loc[results['Eout'] > 0, 'HOEP'].mean() * results.SOC.tail(1)) print("running time:", datetime.now() - start_time) print("real value of storage:", (results.real_value.sum() + remaining_value.item())/1000) XGB_2018 = (results.real_value.sum() + remaining_value.item())/1000 results # + ######################################################################################## #################################### 2018 ############################################# ### OPTIMIZATION WITH 12h XGBoost rolling horizon using Model 5 (M5) price forecast ### ####################################################################################### # M5 2018, horizon = 360h start_time = datetime.now() list_Date = [] list_HOEP = [] list_XGB = [] list_XGB24 = [] list_HOEP_lag168 = [] list_HOEP_lag360 = [] list_SOC = [] list_Ein = [] list_Eout = [] list_value = [] horizon = 359 duration = 12 for i in range(0,8760): df = data.loc[0+i:horizon+i] df.reset_index(inplace=True) model = ConcreteModel() # Variables of the model model.T = Set(initialize=df.index, ordered=True) model.Rmax = Param(initialize=1, within=Any) model.Smax = Param(initialize=duration, within=Any) model.Dmax = Param(initialize=duration, within=Any) model.eta = Param(initialize=0.86) model.Ein = Var(model.T, domain=NonNegativeReals) model.Eout = Var(model.T, domain=NonNegativeReals) model.Z = Var(model.T, domain=NonNegativeReals) model.L = Var(model.T, domain=NonNegativeReals) model.value = Var(model.T) # Constraints def storage_state(model, t): if df['Datetime'].iloc[0] == data.Datetime[0]: if t == model.T.first(): return model.Z[t] == duration/2 else: return model.Z[t] == model.Z[t-1] + model.Ein[t]*model.eta - model.Eout[t] else: if t == model.T.first(): return model.Z[t] == SOC + model.Ein[t]*model.eta - model.Eout[t] else: return model.Z[t] == model.Z[t-1] + model.Ein[t]*model.eta - model.Eout[t] model.charge_state = Constraint(model.T, rule=storage_state) def discharge_constraint(model, t): return model.Eout[t] <= model.Rmax model.discharge = Constraint(model.T, rule=discharge_constraint) def charge_constraint(model, t): return model.Ein[t] <= model.Rmax model.charge = Constraint(model.T, rule=charge_constraint) def positive_charge(model, t): return model.Eout[t] <= model.Z[t] model.positive_charge = Constraint(model.T, rule=positive_charge) def max_SOC(model, t): return model.Z[t] <= model.Smax model.max_SOC = Constraint(model.T, rule=max_SOC) def value_constraint(model, t): if t == model.T.first(): return model.value[t] == (model.Eout[t] - model.Ein[t]) * df.loc[t,'XGB'] else: if t in range(1,24): return model.value[t] == (model.Eout[t] - model.Ein[t]) * df.loc[t,'XGB24'] else: if t in range(24,168): return model.value[t] == (model.Eout[t] - model.Ein[t]) * df.loc[t,'HOEP_lag168'] else: return model.value[t] == (model.Eout[t] - model.Ein[t]) * df.loc[t,'HOEP_lag360'] model.value_constraint = Constraint(model.T, rule=value_constraint) # Objective function and optimization profits = sum(model.value[t] for t in model.T) model.objective = Objective(expr=profits, sense=maximize) # Solve model solver = SolverFactory('ipopt') solver.solve(model) # Extract model output in list Date = 
list(df['Datetime'])[0] HOEP = list(df['HOEP'])[0] XGB = list(df['XGB'])[0] XGB24 = list(df['XGB24'])[0] HOEP_lag168 = list(df['HOEP_lag168'])[0] HOEP_lag360 = list(df['HOEP_lag360'])[0] SOC = list(model.Z.get_values().values())[0] Ein= list(model.Ein.get_values().values())[0] Eout = list(model.Eout.get_values().values())[0] value = list(model.value.get_values().values())[0] list_Date.append(Date) list_HOEP.append(HOEP) list_XGB.append(XGB) list_XGB24.append(XGB24) list_HOEP_lag168.append(HOEP_lag168) list_HOEP_lag360.append(HOEP_lag360) list_SOC.append(SOC) list_Ein.append(Ein) list_Eout.append(Eout) list_value.append(value) # Present final results in a dataframe results = pd.DataFrame(list_Date, columns=['Datetime']) results['HOEP'] = list_HOEP results['XGB'] = list_XGB results['XGB24'] = list_XGB24 results['HOEP_lag168'] = list_HOEP_lag168 results['HOEP_lag360'] = list_HOEP_lag360 results['Ein'] = list_Ein results['Eout'] = list_Eout results['SOC'] = list_SOC results['value'] = list_value results['real_value'] = (results.Eout - results.Ein)*results.HOEP results = results.round(2) remaining_value = (results.loc[results['Eout'] > 0, 'HOEP'].mean() * results.SOC.tail(1)) print("running time:", datetime.now() - start_time) print("real value of storage:", (results.real_value.sum() + remaining_value.item())/1000) XGB_2018 = (results.real_value.sum() + remaining_value.item())/1000 results # + ######################################################################################## #################################### 2018 ############################################# ### OPTIMIZATION WITH 12h XGBoost rolling horizon using Model 3 (M3) price forecast ### ####################################################################################### # M3 2018, horizon = 168h start_time = datetime.now() list_Date = [] list_HOEP = [] list_XGB12 = [] list_HOEP_lag168 = [] list_SOC = [] list_Ein = [] list_Eout = [] list_value = [] horizon = 167 duration = 12 for i in range(0,8760): df = data.loc[0+i:horizon+i] df.reset_index(inplace=True) model = ConcreteModel() # Variables of the model model.T = Set(initialize=df.index, ordered=True) model.Rmax = Param(initialize=1, within=Any) model.Smax = Param(initialize=duration, within=Any) model.Dmax = Param(initialize=duration, within=Any) model.eta = Param(initialize=0.86) model.Ein = Var(model.T, domain=NonNegativeReals) model.Eout = Var(model.T, domain=NonNegativeReals) model.Z = Var(model.T, domain=NonNegativeReals) model.L = Var(model.T, domain=NonNegativeReals) model.value = Var(model.T) # Constraints def storage_state(model, t): if df['Datetime'].iloc[0] == data.Datetime[0]: if t == model.T.first(): return model.Z[t] == duration/2 else: return model.Z[t] == model.Z[t-1] + model.Ein[t]*model.eta - model.Eout[t] else: if t == model.T.first(): return model.Z[t] == SOC + model.Ein[t]*model.eta - model.Eout[t] else: return model.Z[t] == model.Z[t-1] + model.Ein[t]*model.eta - model.Eout[t] model.charge_state = Constraint(model.T, rule=storage_state) def discharge_constraint(model, t): return model.Eout[t] <= model.Rmax model.discharge = Constraint(model.T, rule=discharge_constraint) def charge_constraint(model, t): return model.Ein[t] <= model.Rmax model.charge = Constraint(model.T, rule=charge_constraint) def positive_charge(model, t): return model.Eout[t] <= model.Z[t] model.positive_charge = Constraint(model.T, rule=positive_charge) def max_SOC(model, t): return model.Z[t] <= model.Smax model.max_SOC = Constraint(model.T, rule=max_SOC) def 
value_constraint(model, t): if t in range(0,12): return model.value[t] == (model.Eout[t] - model.Ein[t]) * df.loc[t,'XGB12'] else: return model.value[t] == (model.Eout[t] - model.Ein[t]) * df.loc[t,'HOEP_lag168'] model.value_constraint = Constraint(model.T, rule=value_constraint) # Objective function and optimization profits = sum(model.value[t] for t in model.T) model.objective = Objective(expr=profits, sense=maximize) # Solve model solver = SolverFactory('ipopt') solver.solve(model) # Extract model output in list Date = list(df['Datetime'])[0] HOEP = list(df['HOEP'])[0] XGB12 = list(df['XGB12'])[0] HOEP_lag168 = list(df['HOEP_lag168'])[0] SOC = list(model.Z.get_values().values())[0] Ein= list(model.Ein.get_values().values())[0] Eout = list(model.Eout.get_values().values())[0] value = list(model.value.get_values().values())[0] list_Date.append(Date) list_HOEP.append(HOEP) list_XGB12.append(XGB12) list_HOEP_lag168.append(HOEP_lag168) list_SOC.append(SOC) list_Ein.append(Ein) list_Eout.append(Eout) list_value.append(value) # Present final results in a dataframe results = pd.DataFrame(list_Date, columns=['Datetime']) results['HOEP'] = list_HOEP results['XGB12'] = list_XGB12 results['HOEP_lag168'] = list_HOEP_lag168 results['Ein'] = list_Ein results['Eout'] = list_Eout results['SOC'] = list_SOC results['value'] = list_value results['real_value'] = (results.Eout - results.Ein)*results.HOEP results = results.round(2) remaining_value = (results.loc[results['Eout'] > 0, 'HOEP'].mean() * results.SOC.tail(1)) print("running time:", datetime.now() - start_time) print("real value of storage:", (results.real_value.sum() + remaining_value.item())/1000) XGB_2018 = (results.real_value.sum() + remaining_value.item())/1000 results # + ######################################################################################## #################################### 2018 ############################################# ### OPTIMIZATION WITH 12h XGBoost rolling horizon using Model 8 (M8) price forecast ### ####################################################################################### # M8 2018, horizon = 360h start_time = datetime.now() list_Date = [] list_HOEP = [] list_XGB = [] list_XGB12 = [] list_XGB24 = [] list_HOEP_lag168 = [] list_HOEP_lag360 = [] list_SOC = [] list_Ein = [] list_Eout = [] list_value = [] horizon = 359 duration = 12 for i in range(0,8760): df = data.loc[0+i:horizon+i] df.reset_index(inplace=True) model = ConcreteModel() # Variables of the model model.T = Set(initialize=df.index, ordered=True) model.Rmax = Param(initialize=1, within=Any) model.Smax = Param(initialize=duration, within=Any) model.Dmax = Param(initialize=duration, within=Any) model.eta = Param(initialize=0.86) model.Ein = Var(model.T, domain=NonNegativeReals) model.Eout = Var(model.T, domain=NonNegativeReals) model.Z = Var(model.T, domain=NonNegativeReals) model.L = Var(model.T, domain=NonNegativeReals) model.value = Var(model.T) # Constraints def storage_state(model, t): if df['Datetime'].iloc[0] == data.Datetime[0]: if t == model.T.first(): return model.Z[t] == duration/2 else: return model.Z[t] == model.Z[t-1] + model.Ein[t]*model.eta - model.Eout[t] else: if t == model.T.first(): return model.Z[t] == SOC + model.Ein[t]*model.eta - model.Eout[t] else: return model.Z[t] == model.Z[t-1] + model.Ein[t]*model.eta - model.Eout[t] model.charge_state = Constraint(model.T, rule=storage_state) def discharge_constraint(model, t): return model.Eout[t] <= model.Rmax model.discharge = Constraint(model.T, rule=discharge_constraint) 
def charge_constraint(model, t): return model.Ein[t] <= model.Rmax model.charge = Constraint(model.T, rule=charge_constraint) def positive_charge(model, t): return model.Eout[t] <= model.Z[t] model.positive_charge = Constraint(model.T, rule=positive_charge) def max_SOC(model, t): return model.Z[t] <= model.Smax model.max_SOC = Constraint(model.T, rule=max_SOC) def value_constraint(model, t): if t == model.T.first(): return model.value[t] == (model.Eout[t] - model.Ein[t]) * df.loc[t,'XGB'] else: if t in range(1,12): return model.value[t] == (model.Eout[t] - model.Ein[t]) * df.loc[t,'XGB12'] else: if t in range(12,24): return model.value[t] == (model.Eout[t] - model.Ein[t]) * df.loc[t,'XGB24'] else: if t in range(24,168): return model.value[t] == (model.Eout[t] - model.Ein[t]) * df.loc[t,'HOEP_lag168'] else: return model.value[t] == (model.Eout[t] - model.Ein[t]) * df.loc[t,'HOEP_lag360'] model.value_constraint = Constraint(model.T, rule=value_constraint) # Objective function and optimization profits = sum(model.value[t] for t in model.T) model.objective = Objective(expr=profits, sense=maximize) # Solve model solver = SolverFactory('ipopt') solver.solve(model) # Extract model output in list Date = list(df['Datetime'])[0] HOEP = list(df['HOEP'])[0] XGB = list(df['XGB'])[0] XGB12 = list(df['XGB12'])[0] XGB24 = list(df['XGB24'])[0] HOEP_lag168 = list(df['HOEP_lag168'])[0] HOEP_lag360 = list(df['HOEP_lag360'])[0] SOC = list(model.Z.get_values().values())[0] Ein= list(model.Ein.get_values().values())[0] Eout = list(model.Eout.get_values().values())[0] value = list(model.value.get_values().values())[0] list_Date.append(Date) list_HOEP.append(HOEP) list_XGB.append(XGB) list_XGB12.append(XGB12) list_XGB24.append(XGB24) list_HOEP_lag168.append(HOEP_lag168) list_HOEP_lag360.append(HOEP_lag360) list_SOC.append(SOC) list_Ein.append(Ein) list_Eout.append(Eout) list_value.append(value) # Present final results in a dataframe results = pd.DataFrame(list_Date, columns=['Datetime']) results['HOEP'] = list_HOEP results['XGB'] = list_XGB results['XGB12'] = list_XGB12 results['XGB24'] = list_XGB24 results['HOEP_lag168'] = list_HOEP_lag168 results['HOEP_lag360'] = list_HOEP_lag360 results['Ein'] = list_Ein results['Eout'] = list_Eout results['SOC'] = list_SOC results['value'] = list_value results['real_value'] = (results.Eout - results.Ein)*results.HOEP results = results.round(2) remaining_value = (results.loc[results['Eout'] > 0, 'HOEP'].mean() * results.SOC.tail(1)) print("running time:", datetime.now() - start_time) print("real value of storage:", (results.real_value.sum() + remaining_value.item())/1000) XGB_2018 = (results.real_value.sum() + remaining_value.item())/1000 results # + from sklearn.metrics import mean_squared_error from numpy import corrcoef import matplotlib.pyplot as plt df = data.loc[:8759] RMSE_lag24 = np.sqrt(mean_squared_error(df.HOEP,df.HOEP_lag24)) print("RMSE HOEP_lag24:",round(RMSE_lag24,2),"Correlation:", corrcoef(df.HOEP,df.HOEP_lag24)[0,1]) RMSE_lag168 = np.sqrt(mean_squared_error(df.HOEP,df.HOEP_lag168)) print("RMSE HOEP_lag168:",round(RMSE_lag168,2),"Correlation:", corrcoef(df.HOEP,df.HOEP_lag168)[0,1]) RMSE_lag360 = np.sqrt(mean_squared_error(df.HOEP,df.HOEP_lag360)) print("RMSE HOEP_lag360:",round(RMSE_lag360,2), "Correlation:", corrcoef(df.HOEP,df.HOEP_lag360)[0,1]) RMSE_XGB = np.sqrt(mean_squared_error(df.HOEP,df.XGB)) print("RMSE XGB:",round(RMSE_XGB,2),"Correlation:", corrcoef(df.HOEP,df.XGB)[0,1]) RMSE_XGB6 = np.sqrt(mean_squared_error(df.HOEP,df.XGB6)) print("RMSE 
XGB6:",round(RMSE_XGB6,2),"Correlation:", corrcoef(df.HOEP,df.XGB6)[0,1]) RMSE_XGB12 = np.sqrt(mean_squared_error(df.HOEP,df.XGB12)) print("RMSE XGB12:",round(RMSE_XGB12,2),"Correlation:", corrcoef(df.HOEP,df.XGB12)[0,1]) RMSE_XGB24 = np.sqrt(mean_squared_error(df.HOEP,df.XGB24)) print("RMSE XGB24:",round(RMSE_XGB24,2),"Correlation:", corrcoef(df.HOEP,df.XGB24)[0,1]) test = df.loc[4700:4750] plt.plot(test.HOEP, label = 'HOEP') plt.plot(test.HOEP_lag24, label = '.HOEP_lag24') plt.plot(test.XGB, label = 'XGB') plt.plot(test.XGB6, label = 'XGB6') plt.legend(loc='best') plt.show()
63,038
/notebooks/02-transformers.ipynb
81615b501e4456362f8a62d65899ca3c65f406a7
[ "Apache-2.0" ]
permissive
GaoZikai/transformers
https://github.com/GaoZikai/transformers
1
0
Apache-2.0
2020-06-22T10:52:19
2020-06-22T10:50:53
null
Jupyter Notebook
false
false
.py
22,131
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import numpy as np import matplotlib.pyplot import re pd.options.display.max_rows = 1000 pd.options.display.max_columns = 300 pwd inventory = pd.read_csv('../UNO_inventory_item.csv') sale = pd.read_csv('../UNO_Sale_vs_RCVD.csv') sale.head(5) inventory = inventory.drop(['CB Reason', 'ToWH'], axis = 1) inventory.Description = inventory.Description.str.replace('\d+', '') inventory.Description = inventory.Description.str.replace('[^A-Za-z0-9\s]+', '') inventory = inventory.drop(inventory.index[[104821,104822,104823,104824,104825,104826,104826,104828,104829]]) #REMOVE ALL THE NUMBER IN THE COLUMN sale.DESCRIPTION = sale.DESCRIPTION.str.replace('\d+', '') #REMOVE ALL THE SPECIAL CHARACTER WITHOUT REMOVING SPACE (\s means space) sale.DESCRIPTION = sale.DESCRIPTION.str.replace('[^A-Za-z0-9\s]+', '') #Remove all space at the front and at the back sale.DESCRIPTION = sale.DESCRIPTION.str.lower().str.strip() sale = sale.drop(sale.index[[52522,108817,108818,108819,108820]]) sale['Categories'] = sale['DESCRIPTION'].str.split() def up1(x): return x[-7:] def up2(y): return list(map(up1,y)) sale['Categories1'] = up2(sale['Categories']) sale.loc[sale.Categories1.str.contains('earrings'), 'Categories1'] = 'Earrings' sale.loc[sale.Categories1.str.contains('earring'), 'Categories1'] = 'Earrings' sale.loc[sale.Categories1.str.contains('studs'), 'Categories1'] = 'Earrings' sale.loc[sale.Categories1.str.contains('ear'), 'Categories1'] = 'Earrings' sale.loc[sale.Categories1.str.contains('studs'), 'Categories1'] = 'Earrings' sale.loc[sale.Categories1.str.contains('nk'), 'Categories1'] = 'Necklace' sale.loc[sale.Categories1.str.contains('pendant'), 'Categories1'] = 'Necklace' sale.loc[sale.Categories1.str.contains('necklace'), 'Categories1'] = 'Necklace' sale.loc[sale.Categories1.str.contains('y shape glass bead'), 'Categories1'] = 'Necklace' sale.loc[sale.Categories1.str.contains('natural color wood beads with suede'), 'Categories1'] = 'Necklace' sale.loc[sale.Categories1.str.contains('choker'), 'Categories1'] = 'Necklace' sale.loc[sale.Categories1.str.contains('natural mini stone bar'), 'Categories1'] = 'Necklace' sale.loc[sale.Categories1.str.contains('cross'), 'Categories1'] = 'Necklace' sale.loc[sale.Categories1.str.contains('pend'), 'Categories1'] = 'Necklace' sale.loc[sale.Categories1.str.contains('neck'), 'Categories1'] = 'Necklace' sale.loc[sale.Categories1.str.contains('pendent'), 'Categories1'] = 'Necklace' sale.loc[sale.Categories1.str.contains('locket'), 'Categories1'] = 'Necklace' sale.loc[sale.Categories1.str.contains('br'), 'Categories1'] = 'Bracelet' sale.loc[sale.Categories1.str.contains('bl'), 'Categories1'] = 'Bracelet' sale.loc[sale.Categories1.str.contains('color wood beads with natural stone'), 'Categories1'] = 'Bracelet' sale.loc[sale.Categories1.str.contains('tassel'), 'Categories1'] = 'Tassel' sale.loc[sale.Categories1.str.contains('tassle'), 'Categories1'] = 'Tassel' sale.loc[sale.Categories1.str.contains('tsl'), 'Categories1'] = 'Tassel' sale.loc[sale.Categories1.str.contains('ring'), 'Categories1'] = 'Ring' sale.loc[sale.Categories1.str.contains('charm'), 'Categories1'] = 'Charm' sale.loc[sale.Categories1.str.contains('crystal beads w natural stone'), 'Categories1'] = 'Charm' sale.loc[sale.Categories1.str.contains('color beads x stones'), 
'Categories1'] = 'Charm' sale.loc[sale.Categories1.str.contains('chrm'), 'Categories1'] = 'Charm' category = ['Accessories', 'Earrings', 'Necklace', 'Bracelet', 'Tassel', 'Ring', 'Charm'] sale.loc[~sale['Categories1'].isin(category), 'Categories1']='others' sale['Categories1'].value_counts() sale['Attribute'] = up2(sale['Categories']) sale.loc[sale.material_style.str.contains('stone'), 'material_style'] = 'nature_stone' sale.loc[sale.material_style.str.contains('bead'), 'material_style'] = 'bead' sale.loc[sale.material_style.str.contains('rose gold'), 'material_style'] = 'gold' sale.loc[sale.material_style.str.contains('rose gold'), 'material_style'] = 'gold' sale.loc[sale.material_style.str.contains('gold'), 'material_style'] = 'gold' sale.loc[sale.material_style.str.contains('metal'), 'material_style'] = 'metal' sale.loc[sale.material_style.str.contains('leather'), 'material_style'] = 'leather' sale.loc[sale.material_style.str.contains('leatehr'), 'material_style'] = 'leather' sale.loc[sale.material_style.str.contains('pearl'), 'material_style'] = 'pearl' sale.loc[sale.material_style.str.contains('drop'), 'material_style'] = 'gemstone' sale.loc[sale.material_style.str.contains('teardrop'), 'material_style'] = 'shape_something' sale.loc[sale.material_style.str.contains('tear drop'), 'material_style'] = 'shape_something' sale.loc[sale.material_style.str.contains('silk'), 'material_style'] = 'cotton' sale.loc[sale.material_style.str.contains('cutton'), 'material_style'] = 'cotton' sale.loc[sale.material_style.str.contains('cotton'), 'material_style'] = 'cotton' sale.loc[sale.material_style.str.contains('wood'), 'material_style'] = 'wood' sale.loc[sale.material_style.str.contains('shell'), 'material_style'] = 'shell' sale.loc[sale.material_style.str.contains('wire'), 'material_style'] = 'wired' sale.loc[sale.material_style.str.contains('wired'), 'material_style'] = 'wired' sale.loc[sale.material_style.str.contains('long'), 'material_style'] = 'long' sale.loc[sale.material_style.str.contains('short'), 'material_style'] = 'short' sale.loc[sale.material_style.str.contains('druzy'), 'material_style'] = 'druzy' sale.loc[sale.material_style.str.contains('shape'), 'material_style'] = 'shape_something' sale.loc[sale.material_style.str.contains('wrapping'), 'material_style'] = 'wrap' sale.loc[sale.material_style.str.contains('wrapped'), 'material_style'] = 'wrap' sale.loc[sale.material_style.str.contains('hook'), 'material_style'] = 'hook' sale.loc[sale.material_style.str.contains('glass'), 'material_style'] = 'glass' sale.loc[sale.material_style.str.contains('word'), 'material_style'] = 'word' sale.loc[sale.material_style.str.contains('text'), 'material_style'] = 'word' sale.loc[sale.material_style.str.contains('feather'), 'material_style'] = 'feather' sale.loc[sale.material_style.str.contains('stretch'), 'material_style'] = 'stretch' sale.loc[sale.material_style.str.contains('round'), 'material_style'] = 'shape_something' sale.loc[sale.material_style.str.contains('circle'), 'material_style'] = 'shape_something' sale.loc[sale.material_style.str.contains('oval'), 'material_style'] = 'shape_something' sale.loc[sale.material_style.str.contains('rect'), 'material_style'] = 'shape_something' sale.loc[sale.material_style.str.contains('crystal'), 'material_style'] = 'crystal' sale.loc[sale.material_style.str.contains('orbital'), 'material_style'] = 'shape_something' sale.loc[sale.material_style.str.contains('flower'), 'material_style'] = 'flower-styled' sale.loc[sale.material_style.str.contains('leaf'), 
'material_style'] = 'flower-styled' sale.loc[sale.material_style.str.contains('disk'), 'material_style'] = 'shape_something' sale.loc[sale.material_style.str.contains('ball'), 'material_style'] = 'shape_something' sale.loc[sale.material_style.str.contains('arrow'), 'material_style'] = 'shape_something' distinct = ['glass','word','feather','stretch','shape_something','crystal','flower-styled','hook','wrap','shape_something','druzy','short','long','chain','stone','bead','wired', 'rose gold','gold','metal','leather','pearl','gemstone','cotton','wood','shell'] sale.loc[~sale['material_style'].isin(distinct), 'material_style']='others' sale.to_csv('sale_w_categories.csv') sale.to_pickle('sale_w_categories.pkl') inventory.to_pickle('inventory.pkl') inventory.Description = inventory.Description.str.lower().str.strip() inventory['Categories'] = inventory['Description'].str.split() inventory.Categories = inventory.Categories.str.replace('[^A-Za-z0-9\s]+', '') inventory.Categories = sale.Categories.str.replace('\d+', '') inventory = inventory.dropna(thresh=2) inventory = inventory[inventory.Description != ''] inventory['Categories1'] = up2(sale['Categories']) hs (see below). # + pycharm={"is_executing": false} # Single segment input single_seg_input = tokenizer.encode_plus("This is a sample input") # Multiple segment input multi_seg_input = tokenizer.encode_plus("This is segment A", "This is segment B") print("Single segment token (str): {}".format(tokenizer.convert_ids_to_tokens(single_seg_input['input_ids']))) print("Single segment token (int): {}".format(single_seg_input['input_ids'])) print("Single segment type : {}".format(single_seg_input['token_type_ids'])) # Segments are concatened in the input to the model, with print() print("Multi segment token (str): {}".format(tokenizer.convert_ids_to_tokens(multi_seg_input['input_ids']))) print("Multi segment token (int): {}".format(multi_seg_input['input_ids'])) print("Multi segment type : {}".format(multi_seg_input['token_type_ids'])) # + pycharm={"is_executing": false} # Padding highlight tokens = tokenizer.batch_encode_plus( ["This is a sample", "This is another longer sample text"], pad_to_max_length=True # First sentence will have some PADDED tokens to match second sequence length ) for i in range(2): print("Tokens (int) : {}".format(tokens['input_ids'][i])) print("Tokens (str) : {}".format([tokenizer.convert_ids_to_tokens(s) for s in tokens['input_ids'][i]])) print("Tokens (attn_mask): {}".format(tokens['attention_mask'][i])) print() # - # ## Frameworks interoperability # # One of the most powerfull feature of transformers is its ability to seamlessly move from PyTorch to Tensorflow # without pain for the user. # # We provide some convenient methods to load TensorFlow pretrained weight insinde a PyTorch model and opposite. # + pycharm={"is_executing": false} from transformers import TFBertModel, BertModel # Let's load a BERT model for TensorFlow and PyTorch model_tf = TFBertModel.from_pretrained('bert-base-cased') model_pt = BertModel.from_pretrained('bert-base-cased') # + pycharm={"is_executing": false} # transformers generates a ready to use dictionary with all the required parameters for the specific framework. 
input_tf = tokenizer.encode_plus("This is a sample input", return_tensors="tf") input_pt = tokenizer.encode_plus("This is a sample input", return_tensors="pt") # Let's compare the outputs output_tf, output_pt = model_tf(input_tf), model_pt(**input_pt) # Models outputs 2 values (The value for each tokens, the pooled representation of the input sentence) # Here we compare the output differences between PyTorch and TensorFlow. for name, o_tf, o_pt in zip(["output", "pooled"], output_tf, output_pt): print("{} differences: {:.5}".format(name, (o_tf.numpy() - o_pt.numpy()).sum())) # + [markdown] pycharm={"name": "#%% md\n"} # ## Want it lighter? Faster? Let's talk distillation! # # One of the main concerns when using these Transformer based models is the computational power they require. All over this notebook we are using BERT model as it can be run on common machines but that's not the case for all of the models. # # For example, Google released a few months ago **T5** an Encoder/Decoder architecture based on Transformer and available in `transformers` with no more than 11 billions parameters. Microsoft also recently entered the game with **Turing-NLG** using 17 billions parameters. This kind of model requires tens of gigabytes to store the weights and a tremendous compute infrastructure to run such models which makes it impracticable for the common man ! # # ![transformers-parameters](https://lh5.googleusercontent.com/NRdXzEcgZV3ooykjIaTm9uvbr9QnSjDQHHAHb2kk_Lm9lIF0AhS-PJdXGzpcBDztax922XAp386hyNmWZYsZC1lUN2r4Ip5p9v-PHO19-jevRGg4iQFxgv5Olq4DWaqSA_8ptep7) # # With the goal of making Transformer-based NLP accessible to everyone we @huggingface developed models that take advantage of a training process called **Distillation** which allows us to drastically reduce the resources needed to run such models with almost zero drop in performances. # # Going over the whole Distillation process is out of the scope of this notebook, but if you want more information on the subject you may refer to [this Medium article written by my colleague Victor SANH, author of DistilBERT paper](https://medium.com/huggingface/distilbert-8cf3380435b5), you might also want to directly have a look at the paper [(Sanh & al., 2019)](https://arxiv.org/abs/1910.01108) # # Of course, in `transformers` we have distilled some models and made them available directly in the library ! # + pycharm={"is_executing": false} from transformers import DistilBertModel bert_distil = DistilBertModel.from_pretrained('distilbert-base-cased') input_pt = tokenizer.encode_plus( 'This is a sample input to demonstrate performance of distiled models especially inference time', return_tensors="pt" ) # %time _ = bert_distil(input_pt['input_ids']) # %time _ = model_pt(input_pt['input_ids']) # - # ## Community provided models # # Last but not least, earlier in this notebook we introduced Hugging Face `transformers` as a repository for the NLP community to exchange pretrained models. We wanted to highlight this features and all the possibilities it offers for the end-user. # # To leverage community pretrained models, just provide the organisation name and name of the model to `from_pretrained` and it will do all the magic for you ! # # # We currently have more 50 models provided by the community and more are added every day, don't hesitate to give it a try ! 
# + pycharm={"is_executing": false} # Let's load German BERT from the Bavarian State Library de_bert = BertModel.from_pretrained("dbmdz/bert-base-german-cased") de_tokenizer = BertTokenizer.from_pretrained("dbmdz/bert-base-german-cased") de_input = de_tokenizer.encode_plus( "Hugging Face ist eine französische Firma mit Sitz in New-York.", return_tensors="pt" ) print("Tokens (int) : {}".format(de_input['input_ids'].tolist()[0])) print("Tokens (str) : {}".format([de_tokenizer.convert_ids_to_tokens(s) for s in de_input['input_ids'].tolist()[0]])) print("Tokens (attn_mask): {}".format(de_input['attention_mask'].tolist()[0])) print() output_de, pooled_de = de_bert(**de_input) print("Token wise output: {}, Pooled output: {}".format(outputs.shape, pooled.shape))
14,631
/lesson1/installation_guide.ipynb
8b6149e0be07e5f0ef67ef90e3388ffa5ba4d64a
[]
no_license
morkov/express_ml
https://github.com/morkov/express_ml
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
7,066
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.15.2
#   kernelspec:
#     display_name: Python 2
#     language: python
#     name: python2
# ---

# + [markdown] deletable=true editable=true
# #### Don't forget to scroll all the way down! There are a couple more instructions and a cat there

# + [markdown] deletable=true editable=true
# ## Ubuntu:
#
# For the sake of your sanity, use Ubuntu 12.04 LTS or newer.
#
# Install the required tools for Python:
# >`$ sudo apt-get update`<br>
# >`$ sudo apt-get install python-pip python-dev python-virtualenv build-essential`
#
# Install the packages needed to build NumPy, SciPy and Matplotlib:
#
# >`$ sudo apt-get install libatlas-base-dev gfortran`<br>
# >`$ sudo apt-get build-dep python-matplotlib`<br>
#
# Create a Python virtual environment (virtualenv):
# >`$ virtualenv ml`
#
# Activate it:
# >`$ source ml/bin/activate`
#
# (the corresponding prefix should appear in the console)
#
# >`(ml)$ pip install --upgrade pip`<br>
# >`(ml)$ pip install jupyter`<br>
# >`(ml)$ pip install numpy scipy pandas matplotlib scikit-learn seaborn`<br>
#

# + [markdown] deletable=true editable=true
# ## MacOS:
#
# You will have to install <a href="http://brew.sh/" target="_blank">Homebrew</a>, which in turn means installing the <a href="https://developer.apple.com/downloads/" target="_blank">Command Line Tools</a> or <a href="https://developer.apple.com/xcode/" target="_blank">XCode</a>. Follow the instructions on the Homebrew site and everything will work out.
#
# Next, install a fresh version of Python and virtualenv:
#
# >`$ brew install python --with-brewed-openssl`<br>
#
# Install Fortran (needed to build NumPy and SciPy):
#
# >`$ brew install gcc pkg-config freetype`
#
# Create a virtual environment:
#
# >`$ pip install virtualenv`<br>
# >`$ virtualenv ml`<br>
#
# Activate it:
# >`$ source ml/bin/activate`<br>
#
# Install the required Python packages:
#
# >`(ml)$ pip install jupyter numpy scipy pandas scikit-learn matplotlib seaborn`

# + [markdown] deletable=true editable=true
# ## Windows:
#
# <a href="http://python-xy.github.io" target="_blank">Python(x,y)</a>
#
# Problems do come up from time to time. To check that everything installed correctly, run the unit tests in Python:
# >`import numpy`<br>
# >`numpy.test()`<br>
#
# Do the same for pandas, pylab, sklearn.
#
# If an error appears, you can try installing Python and the libraries a different way.
#
# The minimal set needed for this course:
#
#     Python 2.7
#     Jupyter Notebook
#     NumPy
#     Matplotlib
#     Pandas
#     SciKit-Learn
#
# Prefer the 32-bit versions, since the 64-bit ones do not always work correctly under Windows. Note that some libraries have additional dependencies in the form of other libraries, which you will also have to install.

# + [markdown] deletable=true editable=true
# ## Using virtualenv
#
#
# Virtualenv lets you keep the required versions of Python packages in a separate directory and use only those. With virtualenv you can install fresh package versions from the Python Package Index without running into conflicts with the package versions installed system-wide. Installing Python packages into the system directories with pip is also a perfectly acceptable solution. That way you do not have to fiddle with virtualenv at all, but pip then has to be run as root:
#
# > `$ sudo pip install jupyter numpy scipy pandas matplotlib scikit-learn`
#
# Keep in mind, though, that packages may conflict with the system ones, builds may fail, old versions may get imported, and other problems may appear...
#
# To create a virtual environment, run
#
# > `$ virtualenv yourenv`
#
# This creates a `yourenv` directory holding a clean environment with no packages at all. To use the virtual environment you can call the commands from that directory:
#
# > `$ yourenv/bin/python script.py`<br>
# > `$ yourenv/bin/pip install ... # install packages into the virtual environment`<br>
# > `$ yourenv/bin/ipython`
#
# To avoid typing the `yourenv/bin` prefix, it is convenient to set the necessary environment variables for the current shell session (that is, to activate the virtual environment):
#
# > `$ source yourenv/bin/activate`<br>
# > `(yourenv)$ pip install ... # install packages into the virtual environment`<br>
# > `(yourenv)$ jupyter notebook`
#
# After activation, the command-line prompt gets the `(yourenv)` prefix.
#
# To deactivate the virtual environment, run
#
# > `(yourenv)$ deactivate`<br>
# > `$ python # the prefix is gone, python will run in the system environment`
#

# + [markdown] deletable=true editable=true
# ## Launching Jupyter
#
# In the directory you want to serve as the root, run
#
# >`(ml)$ jupyter notebook`<br>
#
# It is convenient to run the notebook under nohup
#
# >`(ml)$ nohup jupyter notebook`<br>
#
# nohup is a special command that intercepts the hang-up signal SIGHUP. You can now close the console without worrying that the notebook server will die. The flip side is that you will then have to stop it with kill or the task manager.
# -

# ## XGBoost:
# http://xgboost.readthedocs.io/en/latest/build.html

# + [markdown] deletable=true editable=true
# <img src="http://www.linusakesson.net/programming/kernighans-lever/cat.png">
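# + [markdown] deletable=true editable=true
# For recent versions of XGBoost the prebuilt wheel from PyPI is usually enough, so inside the
# activated environment you can simply try (this is a shortcut and may not apply to older
# releases or unusual platforms):
#
# >`(ml)$ pip install xgboost`<br>
#
# If no wheel is available for your platform, build from source following the link above.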
5,395
/prefiltering-intensity-threshold.ipynb
7b72ddc7df4d993eb17500dc648a400da5ff4672
[]
no_license
atennapel/cs4065-project
https://github.com/atennapel/cs4065-project
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
14,237
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + from __future__ import print_function import numpy as np #for mathematical operations import matplotlib.pyplot as plt #for plotting output import IPython.display #for audio output import librosa #for audio import math from collections import Counter import librosa.display #for visualisation # %matplotlib inline #http://ibeat.org/piano-chords-free/ #http://biblio.telecom-paristech.fr/cgi-bin/download.cgi?id=10050 #https://hal.archives-ouvertes.fr/hal-00656352/document #http://www.nyu.edu/classes/bello/MIR_files/tonality.pdf # + #Load file for chord detection audio_sample = '/home/student/data/cs4065/demo/Hardstyle.mp3' audio_sample_y, audio_sample_sr = librosa.load(audio_sample) y_harmonic, y_percussive = librosa.effects.hpss(audio_sample_y) # Get the duration of the song (float) duration = librosa.get_duration(y=audio_sample_y, sr=audio_sample_sr) # + audio_sample_chroma = librosa.feature.chroma_cqt(y=y_harmonic, sr=audio_sample_sr) # Make a new figure #plt.figure(figsize=(12,4)) # Display the chromagram #librosa.display.specshow(audio_sample_chroma, sr=audio_sample_sr, x_axis='time', y_axis='chroma', vmin=0, vmax=1) #plt.title('Chromagram') #plt.colorbar() #plt.tight_layout() # + # The amount of seconds each slice will be put N = 10 # The Threshold on which notes will be counted THRESHOLD = 0.9 # Transpose chroma so that it becomes [Timeslice][Notes] chroma_transposed = np.transpose(audio_sample_chroma) # Number of samples per 'N' seconds nsamples = int((len(chroma_transposed) / duration) * N) print("Number of samples per slice {}".format(nsamples)) #number of periods of 'N' seconds nperiods = int(math.ceil(len(chroma_transposed) / nsamples)) print("Number of periods per slice {}".format(nperiods)) pstart = 0 #start of the period pend = pstart + nsamples periods = np.zeros((nperiods, nsamples, 12)) chroma = np.zeros((nperiods, 12)) for i in range(0, nperiods): chroma[i] = np.array([sum(a) / len(a) for a in zip(*chroma_transposed[pstart:pend])]) for slice in chroma_transposed[pstart:pend]: for key in range(0, 12): chroma[i][key] += slice[key] if slice[key] > THRESHOLD else 0 max = np.amax(chroma[i]) for key in range(0, 12): chroma[i][key] /= max pstart = pend+1 pend = pstart+nsamples # - keys = [{'name': 'C-maj', 'scale': ['A', 'B', 'C', 'D', 'E', 'F', 'G']}, {'name': 'G-maj', 'scale': ['A', 'B', 'C', 'D', 'E', 'F#/Gb', 'G']}, {'name': 'F-maj', 'scale': ['A', 'A#/Bb', 'C', 'D', 'E', 'F', 'G']}, {'name': 'D-maj', 'scale': ['A', 'B', 'C#/Db', 'D', 'E', 'F#/Gb', 'G']}, {'name': 'Bb-maj', 'scale': ['A', 'A#/Bb', 'C', 'D', 'D#/Eb', 'F', 'G']}, {'name': 'A-maj', 'scale': ['A', 'B', 'C#/Db', 'D', 'E', 'F#/Gb', 'G#/Ab']}, {'name': 'Eb-maj', 'scale': ['G#/Ab', 'A#/Bb', 'C', 'D', 'D#/Eb', 'F', 'G']}, {'name': 'E-maj', 'scale': ['E', 'F#/Gb', 'G#/Ab', 'A', 'B', 'C#/Db', 'D#/Eb']}, {'name': 'Ab-maj', 'scale': ['G#/Ab', 'A#/Bb', 'C', 'C#/Db', 'D#/Eb', 'F', 'G']}, {'name': 'B-maj', 'scale': ['B', 'C#/Db', 'D#/Eb', 'F', 'F#/Gb', 'G#/Ab', 'A#/Bb']}, {'name': 'C#-maj', 'scale': ['C#/Db', 'D#/Eb', 'F', 'F#/Gb', 'G#/Ab', 'A#/Bb', 'C']}, {'name': 'F#-maj', 'scale': ['F#/Gb', 'G#/Ab', 'A#/Bb', 'B', 'C#/Db', 'D#/Eb', 'F']} ] def NoteToNumber(note): if note == 'C': return 0 elif note == 'C#/Db': return 1 elif note == 'D': return 2 elif note == 'D#/Eb': return 3 elif note == 'E': return 4 
elif note == 'F': return 5 elif note == 'F#/Gb': return 6 elif note == 'G': return 7 elif note == 'G#/Ab': return 8 elif note == 'A': return 9 elif note == 'A#/Bb': return 10 elif note == 'B': return 11 # + results = [] for slice in chroma: best_score = -100 best_key = {'name': 'C-maj', 'scale': ['A', 'B', 'C', 'D', 'E', 'F', 'G']} for key in keys: score = 0 for note in ['C', 'C#/Db', 'D', 'D#/Eb', 'E', 'F', 'F#/Gb', 'G', 'G#/Ab', 'A', 'A#/Bb', 'B']: intensity = slice[NoteToNumber(note)]*10 if note in key['scale']: score += intensity else: score -= intensity if score > best_score: best_score = score best_key = key print(best_key['name']) print(slice) results += {best_key['name']} Counter(results).most_common() # - e.keys() # + from tweetnlp import CMUTweetTagger as tagger from corpus import Tweet def classify(tweet): res = tagger.runtagger_parse([tweet])[0] tokens = [t for t,tag,score in res] tags= [tag for t, tag, score in res] t = Tweet(i, tweet, tokens, tags, "None") # features = extract_features(t) # if len(features)==0: # print zip(tokens, tags) # return "None" # print features.keys() features = feature_vectorizer.transform([t]) matrix = xgb.DMatrix(features) prediction = bst.predict(matrix) prediction = np.argmax(prediction) return id2label[prediction] # - anger = [c for c in corpuses if c.label =="Anger"][0] for i, tweet in enumerate(anger.tweets): if i>30: break print tweet.text print classify(tweet.text) print
5,745
/Rust(1987)/Rust.ipynb
f6d9c35a20d5ef25ed7a72ae038c983e3e643f8c
[]
no_license
hanloong7/DDC
https://github.com/hanloong7/DDC
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
101,523
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Conditional Choice Probability Estimators in 5 Easy Steps! # # ## Author: Eric Schulman # # The following guide demonstrates how to use conditional choice probability (CCP) estimators in Python. It was written in part as a homework for the University of Texas second year course in industrial organization. These estimators are the most common way to think about how the future influences decisions in industrial organization and related economic fields. # # To demonstrate how to use (and implement) a CCP estimator, we recover parameters for the cost function in [Rust 1987](https://www.jstor.org/stable/1911259). Rust's paper considers the decision of a bus manager. The bus manager decides whether or not to replace bus engines for his fleet of buses in Madison, Wisconsin. Replacing the engine has a high cost in the present but letting the engine accumulate mileage makes the bus more likely to break down in the future. Our goal is estimating parameters that tell us the importance of mileage when the bus manager decides to replace the engines. The bus manager's problem is very general and has become a universal example for dynamic decisions in industrial organization. # # We could use a logit to predict the bus manager's decisions. However, Rust found that a model where agents considered the future predicts bus engine replacement decisions more accurately. To think about the future, Rust simplified the bus manager's problem as a Markov decision process. In a Markov decision process, the bus manager only needs to consider how his decisions affect the bus engines' mileage next period. The bus manager controls the bus engines' mileage $x$ through $i$, a variable that determines whether he replaces the bus engine. He experiences unobserved costs $\epsilon$ each period. Given the bus manger's payoffs each period, $u(i,x, \epsilon ; \theta)$, we can write the bus manager's decision as a dynamic program. Rust calculated this value function inside a logit. Rust has code for solving this value function and estimating parameters in Gauss on his [website](https://editorialexpress.com/jrust/nfxp.html). # # # $$V(x) = \text{max}_{i} u(i,x, \epsilon; \theta) + \beta E[V(x') | x, i]$$ # # # The more popular approach to predicting dynamic choices is called conditional choice probability (CCP) estimation. This approach works similar to Rust's asymptotically. However, it simplifies Rust's algorithm. Instead of embedding a value function into a logit, we start with a simple estimate of the choice probabilities and adjust this estimate to account for the future. Our initial estimate of the choice probabilities will help us calculate the value function. With the value function, we will adjust the initial estimates of how an engine's mileage influences its replacement probability. This approach was first introduced to the literature in [Hotz Miller 1993](https://www.jstor.org/stable/2298122). The code and data in this guide are inspired by the Gauss code from Victor Aguirregabiria and Pedro Mira's [website](http://individual.utoronto.ca/vaguirre/wpapers/program_code_survey_joe_2008.html) accompanying their paper [Aguirregabiria Mira 2002](http://individual.utoronto.ca/vaguirre/wpapers/program_code_survey_joe_2008.html). 
# # # + import pandas as pd import math import numpy as np import statsmodels.api as sm import matplotlib.pyplot as plt from scipy.interpolate import interp1d #pre written interpolation function from statsmodels.base.model import GenericLikelihoodModel from scipy import stats #for kernel function # - # ## Step 1: Setup data and constants # # First, we must load the data into memory and pick a discount factor. The discount factor is the most important aspect of our CCP estimator and distinguishes it from a logit. Implicitly, our choice is an assumption about the bus manager because nothing in the data tells us about agent's discount factor (for more about this see [Magnac Thesmar 2002](https://www.jstor.org/stable/2692293)). All we see are mileage and replacement decisions. # + #constants/data set up BETA = .9999 GAMMA = .5772 #euler's constant #format the bus .dat from augirregabiria and mira's website data = np.fromfile('bus1234.dat') data = data.reshape(len(data)/6,6) data = pd.DataFrame(data,columns=['id','group','year','month','replace','miles']) #save to .csv data.to_csv('bus1234.csv') #divide by 1e6 (use the same scale are Rust and AM) data['miles'] = (data['miles'])/1e6 #switch to date time for ease data['date'] = pd.to_datetime(data[['year', 'month']].assign(Day=1)) data = data[['id','group','date','replace','miles']] #lag date date_lag = data.copy() date_lag['date'] = date_lag['date'] - pd.DateOffset(months=1) data = data.merge(date_lag, how='left', on=['id','group','date'] , suffixes=('','_next')) data = data.dropna() # - # ## Step 2: Setup the transition matrices # # The bus managers problem is dynamic. The bus' mileage $x$ will increase no matter what the bus manager does. We need to learn how the state will evolve. To make the problem easier, Rust thought about the bus manager's problem in terms of a discrete number of state variables. Using $K$ discrete states with the Markov property, we can think about the probability of the next state $x'$ given $x$ as a $K \times K$ matrix. The rows of the matrix refer to the current state $x$ and the columns are $x'$. # # ### Discretizing the state variable # First we discretize our continuous data on mileage into $K$ states to caclulate the value function for a given amount of mileage more easily. # + #size of step in discretization STEP = .002 #make states global variables STATES = np.arange(data['miles'].min(),data['miles'].max(), STEP) # - # ### Transition probabilities # Then we estimate $\pi_k$, the probability of 'jumping' $k$ states. For example, $\pi_1$ is the probability that $x$ increases by 1 state next period. We will learn $\pi_k$ using the Gaussian kernel. # + def miles_pdf(i_obs, x_obs, x_next): """estimation of mileage pdf following AM using the kernel function this corresponds to pdfdx in AM's code""" #figure out max number of steps dx = (1-i_obs)*(x_next - x_obs) + i_obs*x_next #number of 'transition' states dx_states = np.arange(dx.min(),dx.max() , STEP) #use kernel groups to make pdf kernel1 = stats.gaussian_kde(dx, bw_method='silverman') pdfdx = kernel1(dx_states) return np.array([pdfdx/pdfdx.sum()]).transpose() MILES_PDF = miles_pdf(data['replace'], data['miles'], data['miles_next']) # - # To make this more concrete, we graph these probabilities below. We see the state is most likely to increase by roughly 2000 miles. 
# + #set up plot of both value functions fig = plt.figure() #make a plot of both value functions plt.ylabel('Probability Density') plt.xlabel('Additional Mileage (divided by 1e6)') plt.plot(STATES[0:len(MILES_PDF)],MILES_PDF) plt.show() # - # ### Transition matrices # With the transition probabilities we can write two matrices that tell us how each bus's mileage $x$ will change depending on the engine replacement decision $i$. Let $F(i)$ be the transition matrix between states depending on the replacement decision $i$. Again, $\pi_k$ is the probability of transitioning $k$ states # # $$F(1) = \begin{bmatrix} # \pi_0 & \pi_1 & \pi_2 & ... & 0 \\ # \pi_0 & \pi_1 & \pi_2 & ... & 0 \\ # ... & ... & ... & ... & ... \\ # \end{bmatrix}$$ # # $$F(0) = \begin{bmatrix} # \pi_0 & \pi_1 & \pi_2 & ... & 0 \\ # 0 & \pi_0 & \pi_1 & ... & 0 \\ # ... & ... & ... & ... & ... \\ \end{bmatrix}$$ # # I implemented $F(1)$ first because it reverts every state back to state 0 and is easier to think about. # + def transition_1(i_obs, x_obs , x_next): """calculate transitions probabilities, non-parametrically this corresponds to fmat2 in AM's code""" #transitions when i=1 pdfdx = miles_pdf(i_obs, x_obs, x_next).transpose() #zero probability of transitioning to large states zeros = np.zeros( (len(STATES),len(STATES)-pdfdx.shape[1]) ) #transitioning to first state and 'jumping' dx states fmat1 = np.tile(pdfdx,(len(STATES),1)) fmat1 = np.concatenate( (fmat1, zeros), axis=1 ) return fmat1 FMAT1 = transition_1(data['replace'], data['miles'],data['miles_next']) # + def transition_0(i_obs, x_obs, x_next): """calculate transitions probabilities, non-parametrically this corresponds to fmat1 in AM's code""" pdfdx = miles_pdf(i_obs, x_obs, x_next).transpose() #initialize fmat array, transitions when i=0 end_zeros = np.zeros((1, len(STATES) - pdfdx.shape[1])) fmat0 = np.concatenate( (pdfdx, end_zeros), axis=1 ) for row in range(1, len(STATES)): #this corresponds to colz i think cutoff = ( len(STATES) - row - pdfdx.shape[1] ) #case 1 far enough from the 'end' of the matrix if cutoff >= 0: start_zeros = np.zeros((1,row)) end_zeros = np.zeros((1, len(STATES) - pdfdx.shape[1] - row)) fmat_new = np.concatenate( (start_zeros, pdfdx, end_zeros), axis=1 ) fmat0 = np.concatenate((fmat0, fmat_new)) #case 2, too far from the end and need to adjust probs else: pdf_adj = pdfdx[:,0:cutoff] pdf_adj = pdf_adj/pdf_adj.sum(axis=1) start_zeros = np.zeros((1,row)) fmat_new = np.concatenate( (start_zeros, pdf_adj), axis=1 ) fmat0 = np.concatenate((fmat0, fmat_new)) return fmat0 FMAT0 = transition_0(data['replace'],data['miles'],data['miles_next']) PR_TRANS = FMAT0, FMAT1 # - # ## Step 3: Calculate the value the function # # With the transition matrices, we can calculate the value function for our maximum likelihood estimation. Taking the value function into account predicts the bus manager's decision more accurately. We need to be careful though. The bus manager may make different decisions about two buses with the same mileage. We include an unobserved costs $\epsilon$ to ensure the model is theoretically consistent with the data. If our model fit the data poorly, this would mean that the bus manager has high 'unobserved' costs. # # To make the unobserved costs analytically tractable, we usually make 3 important assumptions about them # # 1. First we assume that $\epsilon$ is linearly additive with the utility function. # # $$u(i,x, \epsilon ;\theta) = u(i,x ;\theta) + \epsilon $$ # # 2. 
We assume that the shock $\epsilon$ follows an extreme value distribution. Why? To calculate the probability of replacing the engine we must compare $\epsilon_{1}$ to the other possible shock $\epsilon_{0}$. By focusing on extreme values, we only need to think about the likelihood of one shock i.e. the most extreme. # # 3. We also assume thse shocks effect decisions like random noise (conditional independence). In other words, the shocks do not systematically influence the mileage. # # $$p(x', \epsilon |x) = p(x'|x)p(\epsilon)$$ # # # ### Initial conditional choice probabilities # # Our approach to calculating the value function diverges from Rust's original methods. Rust's approach for calculating the value function involves repeatedly applying the Bellman operator to find its fixed point. This becomes time consuming when nested inside a maximum likelihood routine. Our method will be quicker and still fits the data more accurately than a regular logit. We start by using a logit to predict the probability of engine replacement decision $i$ conditional on mileage $x$. # # Our result will be a $K \times 1$ vector with the probability of replacing the engine conditional on the mileage $x$. We will learn this vector, $P$, using a logit just like Aguirregabiria and Mira. We will use these choice probabilities to calculate the value function without repeatedly iterating. In principle, we could experiment with other consistent methods. # + def initial_pr(i_obs, x_obs, d=0): """initial the probability of view a given state following AM. just involves logit to predict Third arguement involves display""" X = np.array([x_obs, x_obs**2, x_obs**3]).transpose() X = sm.add_constant(X) model = sm.Logit(i_obs,X) fit = model.fit(disp=d) if d: print(fit.summary()) x_states = np.array([STATES, STATES**2, STATES**3]).transpose() x_states = sm.add_constant(x_states) return fit.predict(x_states) PR_OBS = initial_pr(data['replace'], data['miles'], d=1) # - # ### Hotz Miller's 'alternative' value function calculation # # Using the payoff function (whose parameters we want to learn), the transition matrices, and the initial choice probabilities (which we estimated) we can now calculate the bus manager's value function using the following formula. # # $$V(x;\theta) = [I_m - \beta( (1-P) * F(0) + P * F(1)) ]^{-1} [(1-P)*(u(0,x;\theta) + \gamma -ln(1-P)) + P*( u(1,x;\theta) + \gamma -ln(P) ) ]$$ # # Hotz Miller derived this formula in their paper. This corresponds to equation (8) in Aguirregabiria Mira 2002. # # For the purposes of clarifying the formula to see how to implement it. # * $*$ is the Hadamard product (i.e. element wise). # * $u(i,x;\theta)$ is the payoff function. Remeber $\epsilon$ is linearly seperable from $u(i,x;\theta)$. I implemented this function using a Python `lambda` expression so that the cost specification is flexible. This means that the routine for calculating the value function takes the cost function (and its parameters) as an argument. # # * Finally, note there is a slight abuse of notation going on. The dimensions of $F$ are $K \times K$ so I needed to tile the vector $P$ in order to take the element wise product. 
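#
# * One more standard result worth keeping in mind (stated here without proof): the $\gamma - ln(P)$ and $\gamma - ln(1-P)$ terms are the expected values of the extreme value shocks conditional on the corresponding choice being optimal,
#
# $$E[\epsilon_i \mid \text{choice } i \text{ optimal}, x] = \gamma - ln(P_i(x))$$
#
# which is why the conditional choice probabilities $P$ are all we need to recover the expected per-period payoff without ever solving for the shocks themselves.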
def hm_value(params, cost, pr_obs, pr_trans): """calculate value function using hotz miller approach""" #set up matrices, transition is deterministic trans0, trans1 = pr_trans #calculate value function for all state pr_tile = np.tile( pr_obs.reshape( len(STATES) ,1), (1, len(STATES) )) denom = (np.identity( len(STATES) ) - BETA*(1-pr_tile)*trans0 - BETA*pr_tile*trans1) numer = ( (1-pr_obs)*(cost(params, STATES, 0) + GAMMA - np.log(1-pr_obs)) + pr_obs*(cost(params, STATES, 1) + GAMMA - np.log(pr_obs) ) ) value = np.linalg.inv(denom).dot(numer) return value # ## Step 4: (Psuedo) Maximum likelihood estimation # # With the value function we just calculated, we can adjust the likelihood of replacing the engine at mileage $x$ with the following formula. It is very similar to the logit we used before and produces a $K\times1$ vector with a probability of replacement in each state. Now, the replacement probability also depends on $\beta$ and future $x$'s. # # $$\psi(P ; \theta) = \dfrac{exp[u(1,x;\theta) + \beta F(1) V(x;\theta)] }{exp[u(1,x;\theta) + \beta F(1) V(x;\theta)] + exp[u(0,x;\theta) + \beta F(0) V(x;\theta)] }$$ # # This corresponds to the $\Psi$ function in Aguirregabiria Mira 2002. They parametrize $\Psi$ using the extreme value distribution right below Proposition 3 in their paper. # # Ultimately, we wanted to learn how mileage influences the bus manager's decision by estimating the parameters in the cost function $\theta$. To learn these parameters, we can maximize the value of this likelihood $\psi$ 'adjusted' for the value function we estimated. This differs from regular maximum likelihood estimation. This approach 'cheated' by estimating the value function ahead of time to make the routine run faster. By cheating we loose precision (efficiency) in our estimates. def hm_prob(params, cost, pr_obs, pr_trans): """calculate psi (i.e. 
CCP likelihood) using the value function from the hotz miller appraoch""" value = hm_value(params, cost, pr_obs, pr_trans) value = value - value.min() #subtract out smallest value trans0, trans1 = pr_trans delta1 = np.exp( cost(params, STATES, 1) + BETA*trans1.dot(value)) delta0 = np.exp( cost(params, STATES, 0) + BETA*trans0.dot(value) ) return delta1/(delta1+delta0) class CCP(GenericLikelihoodModel): """class for estimating the values of R and theta using the CCP routine and the helper functions above""" def __init__(self, i, x, x_next, params, cost, **kwds): """initialize the class i - replacement decisions x - miles x_next - next periods miles params - names for cost function parameters cost - cost function specification, takes agruements (params, x, i) """ super(CCP, self).__init__(i, x, **kwds) #data self.endog = i #these names don't work exactly self.exog = x #the idea is that x is mean indep of epsilon self.x_next = x_next #transitions self.pr_obs = initial_pr(i, x) self.trans = transition_0(i,x,x_next), transition_1(i,x,x_next) #initial model fit self.cost = cost self.num_params = len(params) self.data.xnames = params self.results = self.fit( start_params=np.ones(self.num_params) ) def nloglikeobs(self, params, v=False): """psuedo log likelihood function for the CCP estimator""" # Input our data into the model i = self.endog x = (self.exog/STEP).astype(int)*STEP #discretized x #set up hm state pr prob = hm_prob(params, self.cost, self.pr_obs, self.trans).transpose() prob = interp1d(STATES, prob) prob = prob(x) log_likelihood = (1-i)*np.log(1-prob) + i*np.log(prob) return -log_likelihood.sum() def iterate(self, numiter): """iterate the Hotz Miller estimation procedure 'numiter' times""" i = 0 while(i < numiter): #update pr_obs based on parameters self.pr_obs = hm_prob(self.results.params, self.cost, self.pr_obs, self.trans) #refit the model self.results = self.fit(start_params=np.ones(self.num_params)) i = i +1 # ### Estimation with linear Costs # # First we estimate the cost of additional mileage and the cost of replacing a new engine using a linear cost function specification. This means that the cost associated with driving a bus is directly proportional to its mileage. # # $$u(i,x;\theta) = (1-i)(\theta_1 x) + i(RC) $$ # # Aguirregabiria and Mira found the 'true' maximum likelihood estimates for these parameters in their paper by iteratively calculating the value function. I included these estimates below for reference. Our CCP estimator preforms relatively well in comparison. # # |Parameter| MLE Estimate | # |--|--| # | $\theta_1$|-.58 | # |$RC$| -10.47| # + #define cost functon using lambda expression linear_cost = lambda params, x, i: (1-i)*x*params[i] + i*params[i] linear_model = CCP(data['replace'], data['miles'], data['miles_next'], ['theta1','RC'], linear_cost) print(linear_model.results.summary()) # - # ## Step 5: Iterating the model # # Although our CCP estimator is close to the 'true' maximum likelihood estimates, we can do better! Before we calculated $\psi(P;\theta)$ by estimating the probability of replacing the engine using a logit. The key to calculating $\psi( \cdot ;\theta)$ is $P$. Our approach to estimating $P$ is very simple. So, you may wonder what happens if we use a better starting estimate of the choice probabilities? If we use a more precise $P$, Aguirregabiria and Mira showed in their 2002 paper that $\psi(P;\theta)$ will improve! 
# # Using better starting value like $\psi(P;\hat{\theta})$ guarantees that $\psi( \psi(P;\hat{\theta}); \theta)$ will be more precise. What's more, nothing is stopping us from repeating this process. After calculating $\psi( \psi(P;\hat{\theta}); \theta)$, we can plug your estimate back into $\psi( \cdot ; \theta)$. In fact, if we keep iterating then we will converge to 'true' likelihood function estimates. # # To demonstrate how this works, we iterate the CCP estimation procedure by repeatedly plugging the previous estimate of $\psi( \cdot ; \theta)$ into the likelihood function. # + #save params from 1 iteration for later linear_params0, linear_pr0 = linear_model.results.params, linear_model.pr_obs linear_model.iterate(2) print(linear_model.results.summary()) # - # ### Comparing the CCP estimator # For the sake of comparison, we graph the choice probabilities predicted by the logit against the CCP estimator. Including the cubic term in the logit most likely overfits the data. The probability of replacing is very high for out of sample observations. This may not be realistic. # # Additionally, we see the improved CCP estimator after 2 iterations side by side with our initial estimates. Iterating seems to help with observations out of sample. # + #save first iteration params for later p_linear0 = hm_prob(linear_params0, linear_model.cost, linear_pr0, linear_model.trans) p_linear = hm_prob(linear_model.results.params, linear_model.cost, linear_model.pr_obs, linear_model.trans) #make a plot of both value functions plt.ylabel('Engine Replacment Probability') plt.xlabel('Mileage (divided by 10e6)') plt.plot(STATES, PR_OBS, label='Logit') plt.plot(STATES,p_linear0,label='Linear (0 iterations)') plt.plot(STATES, p_linear, label='Linear (2 iterations)') plt.legend() plt.show() # - # ### Comparing cost functions # # We might wonder what happens if we change our cost function. In fact, Rust gave many cost function specifications. Rust wanted to demonstrate that a model where the Bus manager considered the future using a dynamic program fit the data better. One additional specification involves quadratic costs. # # $$u(i,x;\theta) = (1-i)(\theta_1 x + \theta_2 x^2) + i(RC) $$ quad_cost = lambda params, x, i: (1-i)*(params[0]*x + params[1]*x**2) + i*params[2] quad_model = CCP(data['replace'], data['miles'], data['miles_next'], ['theta1','theta2', 'R'], quad_cost) quad_model.iterate(2) print(quad_model.results.summary()) # When we compare the replacement probabilities under both specifications side by side, we see that the different specification does not drastically change the bus manager's decision at a given mileage. They are much closer than the linear function compared with the logit. Considering the limited data, we probably cannot learn the actual functional form for the cost function. # + #set up plot of both value functions fig = plt.figure() p_quad = hm_prob(quad_model.results.params, quad_model.cost, quad_model.pr_obs, quad_model.trans) #make a plot of both value functions plt.ylabel('Engine Replacment Probability') plt.xlabel('Mileage (divided by 10e6)') plt.plot(STATES, PR_OBS, label='Logit') plt.plot(STATES,p_linear,label='Linear (2 Iterations)') plt.plot(STATES, p_quad, label='Quadratic (2 Iterations)') plt.legend() plt.show() # - # ## Works Cited # # * Aguirregabiria, V., & Mira, P. (2002). Swapping the Nested Fixed Point Algorithm: A Class of Estimators for Discrete Markov Decision Models. Econometrica, 70(4), 1519-1543. 
#   Retrieved from http://www.jstor.org/stable/3082006
#
# * Hotz, V., & Miller, R. (1993). Conditional Choice Probabilities and the Estimation of Dynamic Models. The Review of Economic Studies, 60(3), 497-529. Retrieved from http://www.jstor.org/stable/2298122
#
# * Magnac, T., & Thesmar, D. (2002). Identifying Dynamic Discrete Decision Processes. Econometrica, 70(2), 801-816. Retrieved from http://www.jstor.org/stable/2692293
#
# * Rust, J. (1987). Optimal Replacement of GMC Bus Engines: An Empirical Model of Harold Zurcher. Econometrica, 55(5), 999-1033. Retrieved from https://www.jstor.org/stable/1911259
#
24,097
/02-Python1/02-Python-1-Data.ipynb
84b05ea78e2c19d39b6fd9326e141ac735379f5f
[]
no_license
cpmcey/2016-01-11_Sheffield_Notes
https://github.com/cpmcey/2016-01-11_Sheffield_Notes
0
0
null
2016-01-12T09:54:28
2016-01-11T23:07:20
null
Jupyter Notebook
false
false
.py
78,279
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python [default] # language: python # name: python3 # --- import pandas as pd import numpy as np import matplotlib.pyplot as plt import math import seaborn as sns import scipy.stats as stats from sklearn import preprocessing from matplotlib.mlab import PCA as mlabPCA from sklearn.preprocessing import StandardScaler from sklearn.decomposition import PCA from matplotlib.cm import register_cmap from scipy import stats from sklearn.decomposition import PCA as sklearnPCA # S&P 500 companies' fundamental data. df = pd.read_csv('fundamentals.csv') df.dtypes # + # Convert Period Ending column to date format and create year column. df['Period Ending'] = pd.to_datetime(df['Period Ending']) df['year'] = df['Period Ending'].dt.year # Show number of stocks by year. df.groupby('year').count()['Ticker Symbol'] # - # Create a subset. df_sub = df.loc[(df['year']==2015)|(df['year']==2014), ['Ticker Symbol', 'Capital Expenditures', 'Earnings Before Interest and Tax', 'Gross Profit', 'Profit Margin', 'Total Assets', 'Total Liabilities', 'year']] df_sub.head() # Show univariate distributions of the variables. g = sns.PairGrid(df_sub.dropna(), diag_sharey=False) g.map_upper(plt.scatter, alpha=.5) g.map_lower(sns.regplot, scatter_kws=dict(alpha=0)) g.map_diag(sns.kdeplot, lw=3) plt.show() # Create correlation matrix. corrmat = df_sub.corr() f, ax = plt.subplots(figsize=(12, 9)) sns.heatmap(corrmat, vmax=.8, square=True, annot=True) plt.show() # + # Craete boxplots to compare the six variables. fig = plt.figure(figsize=(20, 15)) ax1 = fig.add_subplot(321) ax2 = fig.add_subplot(322) ax3 = fig.add_subplot(323) ax4 = fig.add_subplot(324) ax5 = fig.add_subplot(325) ax6 = fig.add_subplot(326) cap = sns.boxplot(x='year', y='Capital Expenditures', palette='muted', data=df_sub, ax=ax1) ebit = sns.boxplot(x='year', y='Earnings Before Interest and Tax', palette='muted', data=df_sub, ax=ax2) gp = sns.boxplot(x='year', y='Gross Profit', palette='muted', data=df_sub, ax=ax3) pm = sns.boxplot(x='year', y='Profit Margin', palette='muted', data=df_sub, ax=ax4) ta = sns.boxplot(x='year', y='Total Assets', palette='muted', data=df_sub, ax=ax5) tl = sns.boxplot(x='year', y='Total Liabilities', palette='muted', data=df_sub, ax=ax6) plt.show() # + for col in df_sub.loc[:,'Capital Expenditures':'Total Liabilities'].columns: print(col) print(stats.ttest_ind( df_sub[df_sub['year'] == 2014][col].dropna(), df_sub[df_sub['year'] == 2015][col].dropna() )) # No significance between the two years. # - # Create dataframe to hold features. features = pd.get_dummies(df['year']) features['Recent'] = np.where((df['year'].isin([2014, 2015, 2016])), 1, 0) print(pd.crosstab(features['Recent'], df['year'])) # Create a categorical variable based om profit margin. features['high_margin'] = np.where(df['Profit Margin']>=10, 1, 0) print(df['Profit Margin'].groupby(features['high_margin']).describe()) # Find the highest correlated variables to gross profit. corrmat2 = df.corr() print(corrmat2['Gross Profit'][(corrmat2['Gross Profit']>.8) & (corrmat2['Gross Profit']!=1)]) # Combine highly correlated variables. 
means = df[['Gross Profit','Earnings Before Interest and Tax','Operating Income', 'Total Revenue']].mean(axis=0) stds = df[['Gross Profit','Earnings Before Interest and Tax','Operating Income', 'Total Revenue']].std(axis=0) features['earnings'] = ((df[['Gross Profit','Earnings Before Interest and Tax','Operating Income', 'Total Revenue']] - means) / stds).mean(axis=1) plotdf = df.loc[:, ['Gross Profit','Earnings Before Interest and Tax','Operating Income', 'Total Revenue']] plotdf['earnings'] = features['earnings'] corrmat3 = plotdf.corr() print(corrmat3) # + # Show variable transformation. fig = plt.figure(figsize=(20, 15)) ax1 = fig.add_subplot(221) ax2 = fig.add_subplot(222) ax3 = fig.add_subplot(223) ax4 = fig.add_subplot(224) tr1=sns.distplot(df['Total Revenue'].dropna(), hist=True, ax=ax1) ax1.set_title('Raw') tr2=sns.distplot(np.log(df['Total Revenue'].dropna()), hist=True, ax=ax2) ax2.set_title('Log') tr3=sns.distplot(np.log(np.sqrt(df['Total Revenue'].dropna())), hist=True, ax=ax3) ax3.set_title('Square Root') tr4=sns.distplot(1/df['Total Revenue'].dropna(), hist=True, ax=ax4) ax4.set_title('Inverse') plt.show() # Create features from some of the transformation. features['log_total_revenue'] = np.log(df['Total Revenue']) features['square_root_total_revenue'] = np.log(np.sqrt(df['Total Revenue'].dropna())) # + # Show non-linearity between variables. sns.regplot( df['Gross Profit'], y=df['Profit Margin'], y_jitter=.49, order=2, scatter_kws={'alpha':0.3}, line_kws={'color':'black'}, ci=None ) plt.show() features['profit_margin_squared'] = df['Profit Margin'] * df['Profit Margin'] features['cash_ratio_squared'] = df['Cash Ratio'] * df['Cash Ratio'] # + # Normalize variables to same scale. df_num = df.select_dtypes(include=[np.number]).dropna() names=df_num.columns df_scaled = pd.DataFrame(preprocessing.scale(df_num), columns=names) plt.scatter(df_num['Profit Margin'], df_scaled['Profit Margin']) plt.show() print(df_scaled.describe()) # + # Show interaction between some variables. features['profitability'] = np.where(df['Profit Margin'] >= 10, 1, 0) features['earnings_profitability'] = features['earnings'] * features['profitability'] features['operating_margin'] = df['Operating Margin'] sns.lmplot( x='earnings', y='operating_margin', hue='profitability', data=features, scatter=False ) plt.show() # + # Show variables weakly correlated to Gross Profit. print(corrmat2['Gross Profit'][(corrmat2['Gross Profit']<.7) & (corrmat2['Gross Profit']>.2) & (corrmat2['Gross Profit']!=1)]) # Create a PCA subset df_pca = df.loc[(df['year']==2015)|(df['year']==2014), ['Gross Profit', 'Accounts Payable', 'Fixed Assets', 'Long-Term Debt', 'Total Assets', 'Total Liabilities']].dropna() # + # Compare Gross Profit and Accounts Payable. t1 = sns.regplot('Gross Profit','Accounts Payable',df_pca,fit_reg=False) t1.axhline(0, color='k', linestyle='-', linewidth=2) t1.axvline(0, color='k', linestyle='-', linewidth=2) t1.axes.set_title('Raw data') plt.show() # Standardize variables. 
df_pca['Gross Profit_z'] = (df_pca['Gross Profit'] - df_pca['Gross Profit'].mean()) / df_pca['Gross Profit'].std() df_pca['Accounts Payable_z'] = (df_pca['Accounts Payable'] - df_pca['Accounts Payable'].mean()) / df_pca['Accounts Payable'].std() t2 = sns.regplot('Gross Profit_z','Accounts Payable_z',df_pca,fit_reg=False) t2.axhline(0, color='k', linestyle='-', linewidth=2) t2.axvline(0, color='k', linestyle='-', linewidth=2) t2.axes.set_title('Standardized data') plt.show() # + df_pca['Gross Profit_z_rot'] = math.cos(20) * df_pca['Gross Profit_z'] - math.sin(20) * df_pca['Accounts Payable_z'] df_pca['Accounts Payable_z_rot'] = math.sin(20) * df_pca['Gross Profit_z'] + math.cos(20) * df_pca['Accounts Payable_z'] t3 = sns.regplot('Gross Profit_z_rot','Accounts Payable_z_rot',df_pca,fit_reg=False) t3.axhline(0, color='k', linestyle='-', linewidth=2) t3.axvline(0, color='k', linestyle='-', linewidth=2) t3.axes.set_title('Rotated standardized data') plt.show() # + # Conduct PCA on entire dataset. # Drop non-numeric variables and rows with missing values. df1 = df.drop(['Unnamed: 0', 'Period Ending'], axis=1).dropna() # Extract Ticker Symbol column. cols = df1.columns.tolist() cols.insert(0, cols.pop(cols.index('Ticker Symbol'))) df1 = df1.reindex(columns= cols) df2 = df1.iloc[:,1:].values df3 = pd.DataFrame(data=df1.iloc[:,0].values) # Standardize data and create covariance matrix. X_std = StandardScaler().fit_transform(df2) print(np.cov(X_std.T)) # - # Perform eigendecomposition: derive eigenvectors and eigenvalues. cov_mat = np.cov(X_std.T) eig_vals, eig_vecs = np.linalg.eig(cov_mat) print('Eigenvectors \n%s' %eig_vecs) print('\nEigenvalues \n%s' %eig_vals) # List eigenvalues in descending order. eig_pairs = [(np.abs(eig_vals[i]), eig_vecs[:,i]) for i in range(len(eig_vals))] print('Eigenvalues in descending order:') for i in eig_pairs: print(i[0]) # + # Plot cumulative explained variance to determine how many principal components to choose for the new feature subspace. pca = PCA().fit(X_std) plt.plot(np.cumsum(pca.explained_variance_ratio_)) plt.xlabel('number of components') plt.ylabel('cumulative explained variance') plt.show() # 90% of the variance comes from 22 principal components; therefore, pick the top 22 eigenvalues from above. # - # Construct the new feature subspace. sklearn_pca = PCA(n_components=22) Y_sklearn = sklearn_pca.fit_transform(X_std) print(Y_sklearn) Y_sklearn.shape # The 77 variables were reduced to 22 principal components at 90% variance. # Construct the new dataframe with the principal components. df4 = pd.DataFrame(data = Y_sklearn, columns = ['pc1','pc2','pc3','pc4','pc5','pc6','pc7','pc8','pc9','pc10','pc11','pc12','pc13','pc14','pc15','pc16','pc17','pc18','pc19','pc20','pc21','pc22']) df5 = pd.concat([df3, df4], axis = 1) df5.head() -text">friends</code> and <code class="language-text">others</code> as <code class="language-text">groupType</code>s. Groups may change over time.</td> # </tr> # <tr> # <td>listed</td> # <td>A grouped response of lists that contain this venue. Contains a <code class="language-text">summary</code> string representing the acting user's relationship to these lists. If an acting user is present, groups may include <code class="language-text">todos</code>, <code class="language-text">created</code>, <code class="language-text">edited</code>, <code class="language-text">followed</code>, <code class="language-text">friends</code>, and <code class="language-text">others</code>. 
If this venue is on the acting user's todo list, those items will be included in the <code class="language-text">todos</code> group.</td> # </tr> # <tr> # <td>beenHere</td> # <td>Contains <code class="language-text">count</code> of the number of times the acting user has been here. Absent if there is no acting user.</td> # </tr> # <tr> # <td>shortUrl</td> # <td>A short URL for this venue, e.g. <a href="http://4sq.com/Ab123D">http://4sq.com/Ab123D</a></td> # </tr> # <tr> # <td>canonicalUrl</td> # <td>The canonical URL for this venue, e.g. <a href="https://foursquare.com/v/foursquare-hq/4ab7e57cf964a5205f7b20e3">https://foursquare.com/v/foursquare-hq/4ab7e57cf964a5205f7b20e3</a></td> # </tr> # <tr> # <td>photos</td> # <td>A <code class="language-text">count</code> and <code class="language-text">groups</code> of <a href="/docs/venues/photos">photos</a> for this venue. Group types are <code class="language-text">checkin</code> and <code class="language-text">venue</code>. Not all items will be present.</td> # </tr> # <tr> # <td>likes</td> # <td>The <code class="language-text">count</code> of users who have liked this venue, and <code class="language-text">groups</code> containing any <code class="language-text">friends</code> and <code class="language-text">others</code> who have liked it. The groups included are subject to change.</td> # </tr> # <tr> # <td>like</td> # <td>Indicates if the current user has liked this venue.</td> # </tr> # <tr> # <td>dislike</td> # <td>Indicates if the current user has disliked this venue.</td> # </tr> # <tr> # <td>phrases</td> # <td>List of phrases commonly seen in this venue's tips, as well as a sample tip snippet and the number of tips this phrase appears in.</td> # </tr> # <tr> # <td>attributes</td> # <td>Attributes associated with the venue, such as price tier, whether the venue takes reservations, and parking availability.</td> # </tr> # <tr> # <td>roles</td> # <td>Present if and only if the current user has at least one assigned role for this venue. The value is a list of all of the current user's assigned roles for this venue. Possible values for each element of the list are <code class="language-text">manager</code> and <code class="language-text">employee</code>. Subject to change as additional roles may be defined.</td> # </tr> # <tr> # <td>page</td> # <td><code class="language-text">user</code> is the branded page associated with the venue. If the venue is part of a chain, this will be a user object referring to the chain. For venues that are being managed and not part of a chain, this will contain a user object that uniquely refers to this venue.</td> # </tr> # <tr> # <td>bestPhoto</td> # <td>Photo we have determined to be the best photo for the venue based on user upvotes and our internal algorithm.</td> # </tr> # </tbody> # </table> # All the details would be placed into its specific dataframe and then would be processed through clustering. Then, the results would be visualized through a map. # *** # ## References # [1] The International (Dota 2). In Wikipedia. https://en.wikipedia.org/wiki/The_International_(Dota_2) Retrieved May 9, 2020. # # [2] The International 2019. Esport Charts. https://escharts.com/tournaments/dota2/international-2019. Retrieved May 9, 2020. # # [3] Ericsson Globe. In Wikipedia. https://en.wikipedia.org/wiki/Ericsson_Globe. Retrieved May 9, 2020. # # [4] Dota Team. The International[Blog Post]. http://blog.dota2.com/2020/04/the-international-2/. Retrived May 9, 2020. # # [5] Esports Join the Big Leagues. Goldman Sachs. 
https://www.goldmansachs.com/insights/pages/infographics/e-sports/index.html?cid=sch-pd-bing-esportshub-searchad-201810-----&mkwid=9b1dvEvt. Retrieved May 9, 2020.
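#
# As an appendix to the methodology above, here is a minimal sketch of the clustering-and-mapping step. It is only an illustration: the dataframe is a made-up placeholder for the venue details that would come back from the Foursquare API, the column names and coordinates are assumptions, and k-means with folium is just one possible choice of clustering algorithm and map library.

# +
import pandas as pd
from sklearn.cluster import KMeans
import folium

# hypothetical venue details gathered from the API
venues = pd.DataFrame({
    'name':   ['Venue A', 'Venue B', 'Venue C', 'Venue D'],
    'lat':    [59.2935, 59.3050, 59.3326, 59.3600],
    'lng':    [18.0834, 18.0700, 18.0649, 18.0500],
    'rating': [8.1, 7.4, 9.0, 6.8],
})

# cluster the venues (in practice the features would be scaled first)
venues['cluster'] = KMeans(n_clusters=2, random_state=0).fit_predict(
    venues[['lat', 'lng', 'rating']])

# visualize the clusters on a map (centre coordinates are illustrative)
venue_map = folium.Map(location=[59.3293, 18.0686], zoom_start=12)
colors = ['red', 'blue', 'green', 'purple']
for _, row in venues.iterrows():
    folium.CircleMarker(location=[row['lat'], row['lng']], radius=6,
                        color=colors[int(row['cluster'])], fill=True,
                        popup=row['name']).add_to(venue_map)
venue_map  # display the map in the notebook
# -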
13,664
/boston_housing.ipynb
6ea670f8f2b3741b970d1a33ee46ef82953461a8
[]
no_license
nailafadhilah/boston-housing-breast-cancer
https://github.com/nailafadhilah/boston-housing-breast-cancer
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
522,080
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Part 1: Create Linear Regression # To Do: Import data processing packages import numpy as np import pandas as pd import random as rnd # To Do: Import visualization packages import seaborn as sns import matplotlib.pyplot as plt # %matplotlib inline # To Do: Import scikit-learn packages from sklearn.linear_model import LinearRegression from sklearn.model_selection import train_test_split # To Do: Load Datasets from sklearn import datasets # To Do: Load dasasets from scikit learn dataset library boston = datasets.load_boston() # To Do: Print data description using built-in function from scikit-learn print(boston.DESCR) # To Do: Define the data/predictors as the pre-set feature names features = pd.DataFrame(boston.data, columns=boston.feature_names) # To Do: Print features head list features.head() # To Do: Print object type of features print(features.dtypes) # To Do: Put the target (housing value -- MEDV) in another DataFrame target = pd.DataFrame(boston.target, columns=['MEDV']) # To Do: Print target head list target.head() # To Do: Print object type of features print(target.dtypes) # To Do: Create new object for training X = features y = target # To Do: Create linear regression model model = LinearRegression() # To Do: Fit the data to the model model.fit(X,y) # To Do: Calculate model score model.score(X,y) # To Do: Calculate b0 model.intercept_ # To Do: Calculate bn for n= number of features model.coef_ # To Do: Calculate predictions using regressor data as input, and print five first result predictions = model.predict(X) print(predictions[0:5]) # ## Part 2: Split Data for Train and Test # To Do: Split data for training and testing X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0) # To Do: Make sure the X_train data shape X_train.shape # To Do: Make sure the y_train data shape y_train.shape # To Do: Make sure the X_test data shape X_test.shape # To Do: Make sure the y_test data shape y_test.shape # To Do: Create regressor and fit data train regressor = LinearRegression() regressor.fit(X_train, y_train) # To Do: Calculate model score for data train train_score = regressor.score(X_train, y_train) print('data train - coefficient of determination:', train_score) # To Do: Calculate model score for data test # 20% data dari testing gak masuk ke train test_score = regressor.score(X_test, y_test) print('data test - coefficient of determination:', test_score) # ## Part 3: Combine Data # To Do: Create new dataframe containing features and target combine = features.copy() combine.head() # To Do: Combine both datasets combine['MEDV'] = target combine.head() # To Do: Make sure the combine data shape combine.shape # To Do: Using pandas function, describe combine dataframe combine.describe() # To Do: Make sure there are no null value combine.isnull().any() # ## Part 4: Data Visualization # To Do: Inspect MEDV histogram target.hist() # To Do: Inspect CRIM histogram features['CRIM'].hist() # To Do: Inspect LSTAT histogram features['LSTAT'].hist() # # Part 5: Multiple Linear Regression Model # Create multiple linear regression model using only three features from features dataframe to predict MEDV. # The three features must be chosen based on analysis. # Utilize pandas and seaborn to process and visualize data. 
# Aim for the highest score as possible as you can. # # Part 6 : Answer # Dengan melakukan analisis pada fitur-fitur untuk penjualan rumah di Boston, didapatkan 3 fitur yang memiliki relasi paling dekat terhadap penentuan nilai jual rumah tersebut. Fitur-fitur tersebut adalah: # # <h3>1. RM</h3> # 'RM' merupakan jumlah rata-rata kamar dari rumah-rumah di Boston. # # Alasan pemilihan: # - Untuk nilai RM yang semakin tinggi, harga jual (MEDV) semakin tinggi. (korelasi positif) # - Karena jumlah kamar yang semakin banyak akan berbanding lurus dengan luas rumah yang semakin besar, sehingga wajar jika harga rumah bertambah. # # <h3>2. PTRATIO</h3> # 'PTRATIO' merupakan rasio siswa dengan guru di sekolah dasar dan menengah yang tinggal di Boston. # # Alasan pemilihan: # - Untuk nilai PTRATIO yang semakin tinggi, harga jual (MEDV) semakin rendah. (korelasi negatif) # - Lingkungan sosial di daerah yang didominasi oleh warga "kelas bawah" mungkin tidak kondusif bagi anak-anak. Mungkin juga relatif tidak aman dibandingkan dengan daerah yang didominasi oleh warga "kelas atas". Oleh karena itu, daerah dengan lebih banyak warga "kelas bawah" akan menurunkan permintaan, sehingga menurunkan harga. # # <h3>3. LSTAT</h3> # 'LSTAT' merupakan persentase pemilik rumah di Boston yang dianggap "kelas bawah" (pekerja miskin). # # Alasan pemilihan: # - Untuk nilai LSTAT yang semakin tinggi, harga jual (MEDV) semakin rendah. (korelasi negatif) # - Persentase pemilik rumah dari kalangan miskin akan berbanding lurus dengan kepadatan, kesehatan lingkungan, kebersihan, jumlah kriminalitas, dan banyaknya sekolah dengan kualitas baik di sekitar perumahan. # # Jika dilihat dari correlation_matrix, yang memiliki nilai korelasi paling tinggi dengan target adalah LSTAT = -0.74, RM = 0.70, dan PTRATIO = -0.51 combine.corr().round(2) # plt.figure(figsize=(10,10)) # correlation_matrix = combine.corr().round(2) # # annot = True to print the values inside the square # sns.heatmap(data=correlation_matrix, annot=True) # Model Linear Regression Pertama # Memilih fitur data yang akan digunakan df = features[['RM','PTRATIO','LSTAT']] df.head() # Define target yang akan digunakan (tetap sama dengan tugas sebelumnya) target.head() # Create new object for training X = df y = target # Create new linear regression model model = LinearRegression() # Fit the data to the model model.fit(X,y) # Calculate model score model.score(X,y) # Calculate b0 model.intercept_ # Calculate bn for n= number of features model.coef_ # Calculate predictions using regressor data as input, and print five first result predictions = model.predict(X) print(predictions[0:5]) # ## 2. Memilah data untuk Train dan Test # To Do: Split data for training and testing X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0) # To Do: Make sure the X_train data shape X_train.shape # To Do: Make sure the y_train data shape y_train.shape # To Do: Make sure the X_test data shape X_test.shape # To Do: Make sure the y_test data shape y_test.shape # To Do: Create regressor and fit data train regressor = LinearRegression() regressor.fit(X_train, y_train) # To Do: Calculate model score for data train train_score = regressor.score(X_train, y_train) print('data train - coefficient of determination:', train_score) # To Do: Calculate model score for data test # 20% data dari testing gak masuk ke train test_score = regressor.score(X_test, y_test) print('data test - coefficient of determination:', test_score) # ## 3. 
Menyatukan data fitur yang digunakan dan hasil target # To Do: Create new dataframe containing features and target combine = df.copy() combine.head() # To Do: Combine both datasets combine['MEDV'] = target combine.head() # To Do: Make sure the combine data shape combine.shape # To Do: Using pandas function, describe combine dataframe combine.describe() # ## 4. Visualisasi Data MLR Model # To Do: Inspect MEDV histogram target.hist() # To Do: Inspect CRIM histogram df.hist() # The Pairwise Correlations combine.corr() # + # Menambah libraries baru untuk bantuan visualisasi dengan Seaborn # Using Pearson Correlation plt.figure(figsize=(10,10)) cor = combine.corr() ax = sns.heatmap(cor, annot=True, cmap="YlGnBu", square="True") bottom, top = ax.get_ylim() ax.set_ylim(bottom + 0.5, top - 0.5) plt.show() # - # Seaborn pair-plot sns.pairplot(combine, plot_kws = {'alpha': 0.6}, diag_kws = {'bins': 30}) # + # Untuk membuktikan hubungan antara ketiga fitur yang dipilih, # digunakan modeling MADV sebagai fungsi dari RM, PTRATIO, dan LSTAT. fig, ax = plt.subplots(1, 3) fig.set_size_inches(18.5, 10.5) sns.regplot('RM', 'MEDV', combine, ax=ax[0], scatter_kws={'alpha': 0.4}) sns.regplot('PTRATIO', 'MEDV', combine, ax=ax[2], scatter_kws={'alpha': 0.4}) sns.regplot('LSTAT', 'MEDV', combine, ax=ax[1], scatter_kws={'alpha': 0.4}) "]), 2 ) practice_string = ( summary_table_out.num_practices.map(str) + " (" + summary_table_out.percent_practice.map(str) + "%)" ) practice_string[np.isnan(summary_table_out.percent_practice)] = "-" summary_table_out["practice_string"] = practice_string final_table = summary_table_out.join(medians_table)[column_names.keys()] final_table_display = final_table.rename(columns=column_names, index=title_mapping) display(final_table_display) # - # ## Summary display( Markdown( f"Total number of hazardous prescribing events throughout the study period: {int(summary['total_events'])}" ), Markdown( f"Number of unique patients at risk of a hazardous prescribing event throughout the study period: {summary['total_patients_denominator']}" ), Markdown( f"Number of unique patients experiencing a hazardous prescribing event throughout the study period: {summary['total_patients']}" ), Markdown( f"Number of practices with at least one hazardous prescribing event throughout the study period: {practice_count_event} ({round((practice_count_event/practice_count), 2) * 100}%)" ), ) # Relevant information for each indicator is summarised in the table below. generate_summary_table() # <a id="gi_bleed"></a> # ## Gastrointestinal (GI) Bleed Indicators # display( Markdown( f"Number of practices with at least one hazardous prescribing event throughout the study period: {summary['gi_bleed']['num_practices']}" ), Markdown( f"Number of unique patients at risk of a hazardous prescribing event throughout the study period: {summary['gi_bleed']['patients_denominator']:,}" ), Markdown( f"Number of unique patients experiencing at least one hazardous prescribing event throughout the study period: {summary['gi_bleed']['patients_numerator']:,}" ), Markdown( f"Number of hazardous prescribing events throughout the study period: {num_gi_bleed_events:,}" ), ) # <a id="a"></a> # ### Age 65+, on oral NSAID without gastroprotection (GI_P3A) # # Prescription of an oral NSAID in the previous 3 months to patients aged 65 or above who have been co-prescribed an ulcer healing drug in the previous 3 months. 
show_summary("a") show_image("../output/figures/plot_a.jpeg") # <a id="b"></a> # ### H/O peptic ulcer, on oral NSAID without gastroprotection (GI_P3B) # # Prescription of an oral NSAID in the previous 3 months to patients with a history of peptic ulceration/gastric bleed. show_summary("b") show_image("../output/figures/plot_b.jpeg") # <a id="c"></a> # ### H/O peptic ulcer, on OAC without gastroprotection (GI_P3C) # # Prescription of an aniplatelet drug in the previous 3 months in patients with a history of peptic ulceration/gatric bleed. show_summary("c") show_image("../output/figures/plot_c.jpeg") # <a id="d"></a> # ### On OAC and oral NSAID (GI_P3D) # # Prescription of warfarin or a DOAC in the previous 3 months and a preascription of an oral NSAID in the previous 3 months. show_summary("d") show_image("../output/figures/plot_d.jpeg") # <a id="e"></a> # ### On OAC and antiplatelet without gastroprotection (GI_P3E) # # Prescription of warfarin or a DOAC in combination with an antiplatelet drug in the previous 3 months without co-prescription of an ulcer-healing drug. # # Note: "In combination" is defined as a co-prescription within 28 days of each other. # show_summary("e") show_image("../output/figures/plot_e.jpeg") # <a id="f"></a> # ### On aspirin and antiplatelet without gastroprotection (GI_P3F) # # Prescription of aspirin in combination with another antiplatelet drug in the previous 3 months without co-prescription of an ulcer-healing drug. # # Note: "In combination" is defined as a co-prescription within 28 days of each other. show_summary("f") show_image("../output/figures/plot_f.jpeg") # <a id="monitoring"></a> # ## Monitoring Indicators display( Markdown( f"Number of practices with at least one hazardous prescribing event throughout the study period: {summary['monitoring']['num_practices']}" ), Markdown( f"Number of unique patients at risk of a hazardous prescribing event throughout the study period: {summary['monitoring']['patients_denominator']:,}" ), Markdown( f"Number of unique patients experiencing at least one hazardous prescribing event throughout the study period: {summary['monitoring']['patients_numerator']:,}" ), Markdown( f"Number of hazardous prescribing events throughout the study period: {num_monitoring_events:,}" ), ) # <a id="ac"></a> # ### Age 75+, on ACEI or loop, no renal function/electrolytes test (MO_P13) # # Absence of a computer-recorded check of renal function or electrolytes in the previous 15 months in patients aged 75 or over who have been prescripted an ACEi of loop diuretic in the previous 6 months. show_summary("ac") show_image("../output/figures/plot_ac.jpeg") # <a id="me"></a> # ### Methotrexate audit (MO_P15) # <a id="me_no_fbc"></a> # #### On methotrexate without recorded full blood count # # Absence of a recorded full blood count in the previous 3 months in patients who have been receiving a methotrexate prescription for at least 3 months. show_summary("me_no_fbc") show_image("../output/figures/plot_me_no_fbc.jpeg") # <a id="me_no_lft"></a> # #### On methotrexate without recorded liver function test # # Absence of a recorded liver function test in the previous 3 months in patients who have been receiving a methotrexate prescription for at least 3 months. 
show_summary("me_no_lft") show_image("../output/figures/plot_me_no_lft.jpeg") # <a id="li"></a> # ### On lithium without recent lithium test (MO_P17) # # Absence of a recorded check of lithium concentration in the previous 3 months in patients who have been receiving a lithium prescription for at least 3 months. show_summary("li") show_image("../output/figures/plot_li.jpeg") # <a id="am"></a> # ### On amiodarone without recent thyroid function test (MO_P18) # # Absence of a recorded thyroid function test in the previous 6 months in patients who have been receiving a lithium prescription for at least 6 months. show_summary("am") show_image("../output/figures/plot_am.jpeg") # <a id="other"></a> # ## Other Indicators # display( Markdown( f"Number of practices with at least one hazardous prescribing event throughout the study period: {summary['other']['num_practices']}" ), Markdown( f"Number of unique patients at risk of a hazardous prescribing event throughout the study period: {summary['other']['patients_denominator']:,}" ), Markdown( f"Number of unique patients experiencing at least one hazardous prescribing event throughout the study period: {summary['other']['patients_numerator']:,}" ), Markdown( f"Number of hazardous prescribing events throughout the study period: {num_other_events:,}" ), ) # <a id="g"></a> # ### Asthma and non-selective BB (AS_P3G) # # Prescription of a non-selective beta-blocker in the previous 3 months in patients with a history of asthma. # # Note: History of asthma is defined as patients with a recorded code for asthma without a more recent asthma resolved code. show_summary("g") show_image("../output/figures/plot_g.jpeg") # <a id="i"></a> # ### Heart failure and oral NSAID (HF_P3I) # # Prescription of an oral NSAID in the previous 3 months in patients with heart failure. show_summary("i") show_image("../output/figures/plot_i.jpeg") # ### eGFR <45 and oral NSAID (KI_P3K) # # Prescription of an oral NSAID in the previous 3 months to patients with an eGFR < 45. show_summary("k") show_image("../output/figures/plot_k.jpeg")
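#
# As a closing sketch (not part of the released study code): each indicator above is, in essence, a numerator of hazardous prescribing events over a denominator of patients at risk. A minimal illustration of turning such counts into a monthly rate per 1,000 patients, using a hypothetical measure table whose column names are assumptions, might look like this.

# +
import pandas as pd

measure = pd.DataFrame({
    "date": ["2020-01-01", "2020-01-01", "2020-02-01", "2020-02-01"],
    "practice": [1, 2, 1, 2],
    "indicator_numerator": [3, 1, 4, 0],      # hazardous prescribing events
    "indicator_denominator": [950, 1200, 940, 1210],  # patients at risk
})

# aggregate across practices and express as a rate per 1,000 patients at risk
monthly = measure.groupby("date")[["indicator_numerator", "indicator_denominator"]].sum()
monthly["rate_per_1000"] = 1000 * monthly["indicator_numerator"] / monthly["indicator_denominator"]
print(monthly)
# -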
16,334
/Dropout Analysis for Deep Nets/Dropout+Analysis.ipynb
6c631ebd8d0293b5fe57e83c3d8ce6394c19d2dd
[]
no_license
varshachawan/DeepLearningExperiments
https://github.com/varshachawan/DeepLearningExperiments
0
0
null
2019-02-05T02:32:18
2019-02-05T02:32:16
null
Jupyter Notebook
false
false
.py
3,465,559
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # On Dropout Analysis in Deep Nets # Load all the necessary packages # + import numpy as np import os os.environ["THEANO_FLAGS"] = "mode=FAST_RUN,device=gpu,floatX=float32" import theano import keras from keras.datasets import cifar10 from keras.models import Sequential from keras.layers import Dense, Dropout, Activation, Flatten from keras.layers import Convolution2D, MaxPooling2D from keras.optimizers import SGD from keras.utils import np_utils from keras.preprocessing.image import ImageDataGenerator import matplotlib.pyplot as plt # %matplotlib inline from pylab import rcParams rcParams['figure.figsize'] = 20, 20 # - # ## Train on CIFAR-10 dataset # # #### Load CIFAR 10 dataset. # # CIFAR-10 is the widely used dataset in deep learning community to benchmark, validate and evaluate any new findings. # CIFAR-10 dataset contains around 60k images belonging to 10 classes. It contains 50k training and 10k test images. The dataset is available at http://www.cs.toronto.edu/~kriz/cifar.html . Please visit the webpage to know more about the dataset. # + cifar10 = np.load('./../data/data/lab2/lab2_data/cifar10_data.npz') X_train = cifar10['X_train'] y_train = cifar10['y_train'] X_test = cifar10['X_test'] y_test = cifar10['y_test'] print "Training data:" print "Number of examples: ", X_train.shape[0] print "Number of channels:",X_train.shape[1] print "Image size:", X_train.shape[2], X_train.shape[3] print print "Test data:" print "Number of examples:", X_test.shape[0] print "Number of channels:", X_test.shape[1] print "Image size:",X_test.shape[2], X_test.shape[3] # - # #### Visualize some images from CIFAR-10 dataset. # It contains 10 classes namely, airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck # + plot = [] for i in range(1,10): plot_image = X_train[100*i,:,:,:].transpose(1,2,0) for j in range(1,10): plot_image = np.concatenate((plot_image, X_train[100*i+j,:,:,:].transpose(1,2,0)), axis=1) if i==1: plot = plot_image else: plot = np.append(plot, plot_image, axis=0) plt.imshow(plot) plt.axis('off') plt.show() # - # #### Normalize the data. # + print "mean before normalization:", np.mean(X_train) print "std before normalization:", np.std(X_train) mean=[0,0,0] std=[0,0,0] newX_train = np.ones(X_train.shape) newX_test = np.ones(X_test.shape) for i in xrange(3): mean[i] = np.mean(X_train[:,i,:,:]) std[i] = np.std(X_train[:,i,:,:]) for i in xrange(3): newX_train[:,i,:,:] = X_train[:,i,:,:] - mean[i] newX_train[:,i,:,:] = newX_train[:,i,:,:] / std[i] newX_test[:,i,:,:] = X_test[:,i,:,:] - mean[i] newX_test[:,i,:,:] = newX_test[:,i,:,:] / std[i] X_train = newX_train X_test = newX_test print "mean after normalization:", np.mean(X_train) print "std after normalization:", np.std(X_train) # - # #### Specify Training Parameters # + batchSize = 512 #-- Training Batch Size num_classes = 10 #-- Number of classes in CIFAR-10 dataset num_epochs = 100 #-- Number of epochs for training learningRate= 0.001 #-- Learning rate for the network lr_weight_decay = 0.95 #-- Learning weight decay. 
Reduce the learn rate by 0.95 after epoch img_rows, img_cols = 32, 32 #-- input image dimensions Y_train = np_utils.to_categorical(y_train, num_classes) Y_test = np_utils.to_categorical(y_test, num_classes) # - # #### VGGnet-10 # + from keras import initializations import copy result = {} y = {} loss = [] acc = [] dropouts = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9] for dropout in dropouts: print "Dropout: ", (dropout) model = Sequential() #-- layer 1 model.add(Convolution2D(64, 3, 3, border_mode='valid', input_shape=(3, img_rows, img_cols))) model.add(Dropout(dropout)) model.add(Convolution2D(64, 3, 3)) model.add(Dropout(dropout)) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) ##--layer 2 model.add(Convolution2D(128, 3, 3)) model.add(Dropout(dropout)) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) ##--layer 3 model.add(Convolution2D(256, 3, 3)) model.add(Dropout(dropout)) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) ##-- layer 4 model.add(Flatten()) model.add(Dense(512)) model.add(Activation('relu')) #-- layer 5 model.add(Dense(512)) model.add(Activation('relu')) #-- layer 6 model.add(Dense(num_classes)) #-- loss model.add(Activation('softmax')) sgd = SGD(lr=learningRate, decay = lr_weight_decay) model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy']) model_cce = model.fit(X_train, Y_train, batch_size=batchSize, nb_epoch=20, verbose=1, shuffle=True, validation_data=(X_test, Y_test)) score = model.evaluate(X_test, Y_test, verbose=0) y[dropout] = model.predict(X_test) print('Test score:', score[0]) print('Test accuracy:', score[1]) result[dropout] = copy.deepcopy(model_cce.history) loss.append(score[0]) acc.append(score[1]) print models # - # ## Plotting Results. # + import numpy as np import matplotlib.pyplot as plt width = 0.1 plt.bar(dropouts, acc, width, align='center') plt.tick_params(axis='both', which='major', labelsize=35) plt.tick_params(axis='both', which='minor', labelsize=35) plt.ylabel('Accuracy',size = 30) plt.xlabel('Dropout', size = 30) plt.show() # + import numpy as np import matplotlib.pyplot as plt width = 0.1 plt.bar(dropouts, loss, width, align='center',color = 'green') plt.tick_params(axis='both', which='major', labelsize=35) plt.tick_params(axis='both', which='minor', labelsize=35) plt.ylabel('Loss',size = 30) plt.xlabel('Dropout', size = 30) plt.show()
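# ## Validation accuracy per epoch (sketch)
#
# As a possible follow-up cell (a sketch, not part of the original experiment): plot the validation-accuracy curves stored in `result` for each dropout rate. This assumes the Keras version used above records validation accuracy under the `'val_acc'` history key.

# +
import matplotlib.pyplot as plt  # already imported above; repeated so the cell stands alone

plt.figure(figsize=(12, 8))
for d in dropouts:  #-- `dropouts` and `result` come from the training loop above
    plt.plot(result[d]['val_acc'], label='dropout = %.1f' % d)
plt.tick_params(axis='both', which='major', labelsize=20)
plt.xlabel('Epoch', size=20)
plt.ylabel('Validation accuracy', size=20)
plt.legend(loc='lower right')
plt.show()
# -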
6,862
/cirrhosis_prediction.ipynb
619765bf5a51de8e1f27ce1762aa1cea4a436f7a
[]
no_license
AlexKryshtop/Cirrhosis-Prediction
https://github.com/AlexKryshtop/Cirrhosis-Prediction
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
1,416,701
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/AlexKryshtop/Cirrhosis-Prediction/blob/main/cirrhosis_prediction.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" papermill={"duration": 1.360597, "end_time": "2021-09-15T13:12:10.618170", "exception": false, "start_time": "2021-09-15T13:12:09.257573", "status": "completed"} id="6ea35e19" outputId="9c9323f7-db72-4a84-8f14-57c786346000" import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns # %matplotlib inline from sklearn.linear_model import LogisticRegression from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, StackingClassifier, BaggingClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.svm import SVC from sklearn.model_selection import train_test_split, GridSearchCV from sklearn.preprocessing import LabelEncoder, StandardScaler from sklearn.impute import KNNImputer from collections import defaultdict from sklearn.metrics import roc_auc_score, roc_auc_score from itertools import product from tqdm import tqdm import os for dirname, _, filenames in os.walk('/kaggle/input'): for filename in filenames: print(os.path.join(dirname, filename)) # + papermill={"duration": 0.119194, "end_time": "2021-09-15T13:12:10.802374", "exception": false, "start_time": "2021-09-15T13:12:10.683180", "status": "completed"} id="bca7201c" outputId="3db5e057-e142-4026-f653-83f526a31a61" df = pd.read_csv('../input/cirrhosis-prediction-dataset/cirrhosis.csv') df.head() # + [markdown] papermill={"duration": 0.065198, "end_time": "2021-09-15T13:12:10.934234", "exception": false, "start_time": "2021-09-15T13:12:10.869036", "status": "completed"} id="884841ec" # **Attribute Information** # 1) ID: unique identifier # 2) N_Days: number of days between registration and the earlier of death, transplantation, or study analysis time in July 1986 # 3) Status: status of the patient C (censored), CL (censored due to liver tx), or D (death) # 4) Drug: type of drug D-penicillamine or placebo # 5) Age: age in [days] # 6) Sex: M (male) or F (female) # 7) Ascites: presence of ascites N (No) or Y (Yes) # 8) Hepatomegaly: presence of hepatomegaly N (No) or Y (Yes) # 9) Spiders: presence of spiders N (No) or Y (Yes) # 10) Edema: presence of edema N (no edema and no diuretic therapy for edema), S (edema present without diuretics, or edema resolved by diuretics), or Y (edema despite diuretic therapy) # 11) Bilirubin: serum bilirubin in [mg/dl] # 12) Cholesterol: serum cholesterol in [mg/dl] # 13) Albumin: albumin in [gm/dl] # 14) Copper: urine copper in [ug/day] # 15) Alk_Phos: alkaline phosphatase in [U/liter] # 16) SGOT: SGOT in [U/ml] # 17) Triglycerides: triglicerides in [mg/dl] # 18) Platelets: platelets per cubic [ml/1000] # 19) Prothrombin: prothrombin time in seconds [s] # 20) Stage: histologic stage of disease (1, 2, 3, or 4) # + [markdown] papermill={"duration": 0.06514, "end_time": "2021-09-15T13:12:11.065054", "exception": false, "start_time": "2021-09-15T13:12:10.999914", "status": "completed"} id="f8ea9909" # # Solving missing values problem # + 
papermill={"duration": 0.090965, "end_time": "2021-09-15T13:12:11.222141", "exception": false, "start_time": "2021-09-15T13:12:11.131176", "status": "completed"} id="2d3ea677" outputId="34d104e0-e778-4394-8237-4255dda64545" df.info() # + [markdown] papermill={"duration": 0.066686, "end_time": "2021-09-15T13:12:11.354716", "exception": false, "start_time": "2021-09-15T13:12:11.288030", "status": "completed"} id="bc503e29" # ## Categorical values encoding # + [markdown] papermill={"duration": 0.065409, "end_time": "2021-09-15T13:12:11.486763", "exception": false, "start_time": "2021-09-15T13:12:11.421354", "status": "completed"} id="b06bd195" # * First of all, dataframe is relatively small. *So we should try to get as much as it possible from each record in given table*. # * Those observations that have missing values in 'Drug' column - *group that allowed to gather their anamnesis but **refused** to participate in drug test*. So this group of people have label *'NotParticipated'* in *'Drug'* column. # + papermill={"duration": 0.104583, "end_time": "2021-09-15T13:12:11.657363", "exception": false, "start_time": "2021-09-15T13:12:11.552780", "status": "completed"} id="23e99e85" outputId="63368ce1-2241-48d5-af04-d8d895b6dc83" df['Drug'].value_counts() # + papermill={"duration": 0.100341, "end_time": "2021-09-15T13:12:11.828529", "exception": false, "start_time": "2021-09-15T13:12:11.728188", "status": "completed"} id="0746622a" outputId="a26efdd9-5cf9-4989-9475-1885d1b9a439" df['Drug'] = df['Drug'].fillna('NotParticipated') df # + [markdown] papermill={"duration": 0.066388, "end_time": "2021-09-15T13:12:11.961366", "exception": false, "start_time": "2021-09-15T13:12:11.894978", "status": "completed"} id="4da1a5a6" # * There NaN values in both: categorical columns and numerical # * Before solving NaN-values-problem we will encode cat. variables. But there will be used custom encoder wich allows to encode, skipping NaN values and perform encoding under a group of columns. # * To fill in empty values, we will use imputation based on KNN alg. This kind of imputation solves supervised learning problem for each column with missing values, considering such col as target. # + papermill={"duration": 0.079217, "end_time": "2021-09-15T13:12:12.106730", "exception": false, "start_time": "2021-09-15T13:12:12.027513", "status": "completed"} id="b1923e0d" def encode_with_nan(data): """ Encode cat columns in df skipping nan. 'fit_transform' in LabelEncoder() """ cat_cols = data.dtypes cat_cols = list(cat_cols[cat_cols == 'object'].index) decoder = dict.fromkeys(cat_cols, dict()) for col in cat_cols: vals = list(data[col].unique()) if np.nan in vals: vals.remove(np.nan) d = dict.fromkeys(vals, None) for val in enumerate(vals): d[val[1]] = val[0] decoder[col] = d data[col] = data[col].apply(lambda x: d[x] if x in d.keys() else x) return data, decoder def decode_cat(data, decoder): """ Decode cat columns in df skipping nan. 
'inverse_transform' in LabelEncoder() """ cat_cols = list(decoder.keys()) for col in cat_cols: vals = list(data[col].unique()) if np.nan in vals: vals.remove(np.nan) keys = list(decoder[col].keys()) data[col] = data[col].apply(lambda x: keys[list(decoder[col].values()).index(x)] if x in decoder[col].values() else x) return data # + papermill={"duration": 0.081846, "end_time": "2021-09-15T13:12:12.254780", "exception": false, "start_time": "2021-09-15T13:12:12.172934", "status": "completed"} id="d71dd167" outputId="9df3cba9-0bb6-4592-e81b-db92dbcac44a" df, decoder = encode_with_nan(df) decoder # + [markdown] papermill={"duration": 0.068446, "end_time": "2021-09-15T13:12:12.389985", "exception": false, "start_time": "2021-09-15T13:12:12.321539", "status": "completed"} id="b8ba2440" # ## KNN-Imputer # + papermill={"duration": 0.074847, "end_time": "2021-09-15T13:12:12.532006", "exception": false, "start_time": "2021-09-15T13:12:12.457159", "status": "completed"} id="4e4c6f0f" def knn_imputer2df(data, n_neighbors=4, weights='distance'): """ Replaces nan values with prediction by KNN. Can be customed (n_neighbours, weights) """ df_column_names = list(data.columns) imputer = KNNImputer(n_neighbors=n_neighbors, weights=weights) data = imputer.fit_transform(data) return pd.DataFrame(data=data, columns=df_column_names) # + papermill={"duration": 0.175372, "end_time": "2021-09-15T13:12:12.774311", "exception": false, "start_time": "2021-09-15T13:12:12.598939", "status": "completed"} id="9aa9c0c4" outputId="4d89d785-5686-45c5-a84e-295972d6f166" df = knn_imputer2df(df) df # + papermill={"duration": 0.132103, "end_time": "2021-09-15T13:12:12.987110", "exception": false, "start_time": "2021-09-15T13:12:12.855007", "status": "completed"} id="ec3632c4" outputId="8821f35c-fe19-4a27-ed17-2003d8c73724" df.describe() # + [markdown] papermill={"duration": 0.069322, "end_time": "2021-09-15T13:12:13.126200", "exception": false, "start_time": "2021-09-15T13:12:13.056878", "status": "completed"} id="6878239a" # * **Here one can see that category columns in past was fill in by floating numbers.** # * Lets leave it as it is. this values can be interpreted as proximity to 'floor' and 'round' existing values. If this treatment will decrease performance we will round this observations. # + [markdown] papermill={"duration": 0.068328, "end_time": "2021-09-15T13:12:13.263126", "exception": false, "start_time": "2021-09-15T13:12:13.194798", "status": "completed"} id="4682c410" # # EDA # + [markdown] papermill={"duration": 0.069681, "end_time": "2021-09-15T13:12:13.401040", "exception": false, "start_time": "2021-09-15T13:12:13.331359", "status": "completed"} id="f6a8e016" # 1. Decode DataFrame to represent relationships clearly # 2. 
Analysys and plots # + papermill={"duration": 0.076354, "end_time": "2021-09-15T13:12:13.546184", "exception": false, "start_time": "2021-09-15T13:12:13.469830", "status": "completed"} id="b82929b2" def round_encoded_cat_features(data, decoder): columns = list(decoder.keys()) data.loc[:, columns] = data.loc[:, columns].apply(round) data['Stage'] = data['Stage'].apply(round) return data # + papermill={"duration": 0.109719, "end_time": "2021-09-15T13:12:13.723878", "exception": false, "start_time": "2021-09-15T13:12:13.614159", "status": "completed"} id="026aa99a" outputId="cdf36e6d-19fa-48af-8736-9a73b0e07d89" temp_df = df.copy() temp_df = round_encoded_cat_features(temp_df, decoder) temp_df # + papermill={"duration": 0.113911, "end_time": "2021-09-15T13:12:13.906675", "exception": false, "start_time": "2021-09-15T13:12:13.792764", "status": "completed"} id="1cb42bea" outputId="d6d8f08f-aec3-4a3e-9d34-4f8460b071cb" temp_df = decode_cat(temp_df, decoder) temp_df # + papermill={"duration": 0.277777, "end_time": "2021-09-15T13:12:14.254682", "exception": false, "start_time": "2021-09-15T13:12:13.976905", "status": "completed"} id="d95dd0b4" outputId="959eaf82-3785-4242-a29a-4d0309d75c03" plt.figure(figsize=(12, 8)) plt.title('Cirrhosis stage and treatment') sns.boxplot(x=temp_df['Drug'], y=temp_df['Stage']); # + [markdown] papermill={"duration": 0.072476, "end_time": "2021-09-15T13:12:14.396907", "exception": false, "start_time": "2021-09-15T13:12:14.324431", "status": "completed"} id="63b50533" # *First inspection: Those who use Placebo tend to have illness progression* # + papermill={"duration": 0.414455, "end_time": "2021-09-15T13:12:14.880864", "exception": false, "start_time": "2021-09-15T13:12:14.466409", "status": "completed"} id="207d4a0e" outputId="0e6d3b84-fd22-4b05-9ca3-c09a10911fa0" plt.figure(figsize=(12, 8)) plt.title('Days spent in observation and stage') sns.scatterplot(x=temp_df['N_Days'], y=temp_df['Stage'], hue=temp_df['Drug']); # + [markdown] papermill={"duration": 0.070966, "end_time": "2021-09-15T13:12:15.022865", "exception": false, "start_time": "2021-09-15T13:12:14.951899", "status": "completed"} id="7bbef8b7" # *not informative plot* # + papermill={"duration": 0.081104, "end_time": "2021-09-15T13:12:15.175480", "exception": false, "start_time": "2021-09-15T13:12:15.094376", "status": "completed"} id="4a55d6cd" outputId="82246e4b-5e8f-4472-c74b-c147295d6160" temp_df['Sex'].value_counts() # + papermill={"duration": 0.222676, "end_time": "2021-09-15T13:12:15.471763", "exception": false, "start_time": "2021-09-15T13:12:15.249087", "status": "completed"} id="5c4d10a6" outputId="1cde21ae-2b3e-4505-a7d7-3408bbcbb634" plt.figure(figsize=(12, 8)) plt.title('Cirrhosis stage and Sex') sns.boxplot(x=temp_df['Sex'], y=temp_df['Stage']); # + [markdown] papermill={"duration": 0.071836, "end_time": "2021-09-15T13:12:15.616283", "exception": false, "start_time": "2021-09-15T13:12:15.544447", "status": "completed"} id="0441f6f7" # *not informative (disbalanced)* # + papermill={"duration": 0.539408, "end_time": "2021-09-15T13:12:16.227577", "exception": false, "start_time": "2021-09-15T13:12:15.688169", "status": "completed"} id="7251eba7" outputId="2e4fe0ca-8dc7-44fc-ec57-e4fe841fede8" plt.figure(figsize=(12, 8)) plt.title('Cirrhosis stage and patient status') sns.boxplot(x=temp_df['Status'], y=temp_df['Stage'], hue=temp_df['Drug']); # + [markdown] papermill={"duration": 0.0845, "end_time": "2021-09-15T13:12:16.385321", "exception": false, "start_time": "2021-09-15T13:12:16.300821", 
"status": "completed"} id="cc391b7b" # *STATUS????* # + papermill={"duration": 0.411565, "end_time": "2021-09-15T13:12:16.878682", "exception": false, "start_time": "2021-09-15T13:12:16.467117", "status": "completed"} id="25199780" outputId="c3cb45df-a730-4aba-8c92-452af14f9ebf" plt.figure(figsize=(12, 8)) plt.title('Cirrhosis stage and age') sns.scatterplot(x=temp_df['Age'] / 365, y=temp_df['Stage'], hue=temp_df['Drug']); # + papermill={"duration": 0.083494, "end_time": "2021-09-15T13:12:17.042071", "exception": false, "start_time": "2021-09-15T13:12:16.958577", "status": "completed"} id="e86656f8" outputId="935e1830-7af4-46a3-ddda-6d3187471992" temp_df['Age'].mean() / 365 # + papermill={"duration": 0.085781, "end_time": "2021-09-15T13:12:17.205290", "exception": false, "start_time": "2021-09-15T13:12:17.119509", "status": "completed"} id="852efdcc" outputId="d25aa343-38cb-4254-ce5c-6452dcc83812" temp_df['Stage'].value_counts() # + papermill={"duration": 3.977942, "end_time": "2021-09-15T13:12:21.258364", "exception": false, "start_time": "2021-09-15T13:12:17.280422", "status": "completed"} id="0a3c69aa" outputId="3a0c2211-85cc-43d3-eac5-b8675509d6a0" diseases = list(temp_df.iloc[:,6:-1].columns) for d in diseases: if temp_df[d].dtype == 'object': plt.figure(figsize=(8,4)) name = 'Cirrhosis stage and ' + str(d) plt.title(name) sns.boxplot(x=temp_df[d], y=temp_df['Stage'], hue=temp_df['Drug']); else: plt.figure(figsize=(8,4)) name = 'Cirrhosis stage and ' + str(d) plt.title(name) sns.scatterplot(x=temp_df[d], y=temp_df['Stage'], hue=temp_df['Drug']); # + [markdown] papermill={"duration": 0.136715, "end_time": "2021-09-15T13:12:21.482869", "exception": false, "start_time": "2021-09-15T13:12:21.346154", "status": "completed"} id="c92b788b" # ***So one can observe relations between features and target. 
It might be some outliers in bilirium, chole., copper, alk_ph columns.*** # + papermill={"duration": 0.600389, "end_time": "2021-09-15T13:12:22.172440", "exception": false, "start_time": "2021-09-15T13:12:21.572051", "status": "completed"} id="356645f6" outputId="d532b764-b61c-4d94-ed62-857f0267bbaf" plt.figure(figsize=(12,8)) plt.title('Feature corr') sns.heatmap(df.corr()); # + [markdown] papermill={"duration": 0.088885, "end_time": "2021-09-15T13:12:22.350684", "exception": false, "start_time": "2021-09-15T13:12:22.261799", "status": "completed"} id="1ec6104c" # *ID column can be dropped, but let`s leave it* # + papermill={"duration": 0.095558, "end_time": "2021-09-15T13:12:22.536678", "exception": false, "start_time": "2021-09-15T13:12:22.441120", "status": "completed"} id="0303f0ee" #df = df.drop('ID', axis=1) # + papermill={"duration": 3.660898, "end_time": "2021-09-15T13:12:26.316683", "exception": false, "start_time": "2021-09-15T13:12:22.655785", "status": "completed"} id="54380207" outputId="94ad963a-4843-4792-95f2-18a75ee84335" for col in list(temp_df.iloc[:,1:].columns): plt.figure(figsize=(8, 4)) name = str(col) + ' Distribution' plt.title(name) sns.histplot(temp_df[col]); # + [markdown] papermill={"duration": 0.10096, "end_time": "2021-09-15T13:12:26.518907", "exception": false, "start_time": "2021-09-15T13:12:26.417947", "status": "completed"} id="c6de0b3f" # **Log features transform** # + papermill={"duration": 0.137257, "end_time": "2021-09-15T13:12:26.756219", "exception": false, "start_time": "2021-09-15T13:12:26.618962", "status": "completed"} id="19380303" outputId="0594d266-bcdf-4242-cdbc-294d3aa27af9" df # + papermill={"duration": 0.124739, "end_time": "2021-09-15T13:12:26.980873", "exception": false, "start_time": "2021-09-15T13:12:26.856134", "status": "completed"} id="3c912687" columns = ['Bilirubin', 'Cholesterol', 'Copper', 'Alk_Phos', 'SGOT', 'Tryglicerides', 'Platelets', 'Prothrombin'] for col in columns: temp_df[col] = temp_df[col].apply(lambda x: np.log(x + 1)) df[col] = df[col].apply(lambda x: np.log(x + 1)) # + papermill={"duration": 3.490747, "end_time": "2021-09-15T13:12:30.571127", "exception": false, "start_time": "2021-09-15T13:12:27.080380", "status": "completed"} id="2392f751" outputId="716e67cc-dcec-41da-e115-65795a747f21" for col in list(temp_df.iloc[:,1:].columns): plt.figure(figsize=(8, 4)) name = str(col) + ' Distribution' plt.title(name) sns.histplot(temp_df[col]); # + [markdown] papermill={"duration": 0.114288, "end_time": "2021-09-15T13:12:30.796895", "exception": false, "start_time": "2021-09-15T13:12:30.682607", "status": "completed"} id="6da47de6" # # Handling Outliers # + papermill={"duration": 0.123858, "end_time": "2021-09-15T13:12:31.033853", "exception": false, "start_time": "2021-09-15T13:12:30.909995", "status": "completed"} id="6c098475" def remove_outliers(df, column_list): for col in column_list: Q1 = np.quantile(df[col], 0.25) Q3 = np.quantile(df[col], 0.75) IQR = Q3 - Q1 drop_outliers = [x for x in df[col] if ( (x > Q1 - 1.5 * IQR) & (x < Q3 + 1.5 * IQR))] df = df.loc[df[col].isin(drop_outliers)] return df # + [markdown] papermill={"duration": 0.110626, "end_time": "2021-09-15T13:12:31.254936", "exception": false, "start_time": "2021-09-15T13:12:31.144310", "status": "completed"} id="6dddcee6" # # Scaling # + papermill={"duration": 0.128133, "end_time": "2021-09-15T13:12:31.494779", "exception": false, "start_time": "2021-09-15T13:12:31.366646", "status": "completed"} id="a6016fe5" 
outputId="5d8563b0-bef8-4b1c-99de-62f51709b869" X, y = df.drop('Stage', axis=1), df['Stage'] y = y.apply(int) scaler = StandardScaler() X_scaled = scaler.fit_transform(X) X.shape # + papermill={"duration": 0.120197, "end_time": "2021-09-15T13:12:31.729467", "exception": false, "start_time": "2021-09-15T13:12:31.609270", "status": "completed"} id="3c3751db" X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, test_size=0.33, random_state=17) # + [markdown] papermill={"duration": 0.112233, "end_time": "2021-09-15T13:12:31.952984", "exception": false, "start_time": "2021-09-15T13:12:31.840751", "status": "completed"} id="069b7051" # # Models # + [markdown] papermill={"duration": 0.111451, "end_time": "2021-09-15T13:12:32.176336", "exception": false, "start_time": "2021-09-15T13:12:32.064885", "status": "completed"} id="17885f90" # So we faced problem of making decision which approach is better. # 1. We can treat outliers performing log transform, deleting observation using quantiles or just leave them; # 2. We can drop or leave *'ID'* column; # 3. We can use floating encoding for categorical features or just round them. # # + [markdown] papermill={"duration": 0.111482, "end_time": "2021-09-15T13:12:32.399151", "exception": false, "start_time": "2021-09-15T13:12:32.287669", "status": "completed"} id="fcad2253" # Let`s try each combination of possible data preparation while training models. It will take some time, but results may surprise. # + [markdown] papermill={"duration": 0.112616, "end_time": "2021-09-15T13:12:32.625429", "exception": false, "start_time": "2021-09-15T13:12:32.512813", "status": "completed"} id="8c770838" # **models**: # 1. LogReg; # 2. LogReg with cross-validation; # 3. KNN; # 4. KNN with cross-validation; # 5. SVM; # 6. SVM with cross-validation; # 7. RandomForest; # 8. RandomForest with cross-validation; # 9. GradientBoosting; # 10. GradientBoosting with cross-validation. 
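# + [markdown]
# To make the grid of preparation choices concrete, here is a small sketch (an illustration, not part of the original run) that simply enumerates the combinations; each tuple would then be passed to the `pipeline` helper defined further below, so this cell only builds and prints the grid.

# +
from itertools import product  # also imported at the top of the notebook

outlier_options = [None, 'log', 'IQR']  # matches the options handled by data_prep below
drop_id_options = [False, True]
encoding_options = [False, True]        # False = rounded categories, True = floating encoding

prep_grid = list(product(outlier_options, drop_id_options, encoding_options))
print(len(prep_grid), 'preparation combinations, for example:')
for combo in prep_grid[:4]:
    print(dict(zip(['outliers_treatment', 'drop_id', 'floating_cat_features_encoding'], combo)))
# -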
# + [markdown] papermill={"duration": 0.110594, "end_time": "2021-09-15T13:12:32.847084", "exception": false, "start_time": "2021-09-15T13:12:32.736490", "status": "completed"} id="b3b8a4b2" # *Cross-Validation* will be using scoring 'roc_auc_ovo' and 5 fields # + papermill={"duration": 0.121772, "end_time": "2021-09-15T13:12:33.079712", "exception": false, "start_time": "2021-09-15T13:12:32.957940", "status": "completed"} id="691dcf1e" def add_performance_to_df(df_name, name_model, model, train_X, train_y, test_X, test_y): adder = {'model' : '', 'train_roc_auc_score_ovo': '', 'train_roc_auc_score_ovr': '', 'test_roc_auc_score_ovo': '', 'test_roc_auc_score_ovr': ''} train_proba_predictions = model.predict_proba(train_X) test_proba_predictions = model.predict_proba(test_X) adder['model'] = name_model adder['train_roc_auc_score_ovo'] = roc_auc_score(y_true=train_y, y_score = train_proba_predictions, average='macro', multi_class='ovo') adder['train_roc_auc_score_ovr'] = roc_auc_score(y_true=train_y, y_score = train_proba_predictions, average='macro', multi_class='ovr') adder['test_roc_auc_score_ovo'] = roc_auc_score(y_true=test_y, y_score = test_proba_predictions, average='macro', multi_class='ovo') adder['test_roc_auc_score_ovr'] = roc_auc_score(y_true=test_y, y_score = test_proba_predictions, average='macro', multi_class='ovr') return df_name.append(adder, ignore_index=True) def get_models_performance(models, X_train, y_train, X_test, y_test): cols = ['model', 'train_roc_auc_score_ovo', 'train_roc_auc_score_ovr', 'test_roc_auc_score_ovo', 'test_roc_auc_score_ovr'] model_performance = pd.DataFrame(columns=cols) for key in models: model_performance = add_performance_to_df(model_performance, key, models[key], X_train, y_train, X_test, y_test) return model_performance # + [markdown] papermill={"duration": 0.112188, "end_time": "2021-09-15T13:12:33.304837", "exception": false, "start_time": "2021-09-15T13:12:33.192649", "status": "completed"} id="708774c9" # **Functions that make stats for models performance** # + papermill={"duration": 0.121101, "end_time": "2021-09-15T13:12:33.537343", "exception": false, "start_time": "2021-09-15T13:12:33.416242", "status": "completed"} id="de9b4ad4" def data_prep(outliers_treatment=None, drop_id=False, floating_cat_features_encoding=False): data = pd.read_csv('../input/cirrhosis-prediction-dataset/cirrhosis.csv') data['Drug'] = data['Drug'].fillna('NotParticipated') data, decoder = encode_with_nan(data) data = knn_imputer2df(data) if not floating_cat_features_encoding: data = round_encoded_cat_features(data, decoder) if outliers_treatment == 'log': columns = ['Bilirubin', 'Cholesterol', 'Copper', 'Alk_Phos', 'SGOT', 'Tryglicerides', 'Platelets', 'Prothrombin'] for col in columns: data[col] = data[col].apply(lambda x: np.log(x + 1)) if outliers_treatment == 'IQR': columns = ['Bilirubin', 'Cholesterol', 'Copper', 'Alk_Phos', 'SGOT', 'Tryglicerides', 'Platelets', 'Prothrombin'] data = remove_outliers(data, columns) if drop_id: data = data.drop('ID', axis=1) X, y = data.drop('Stage', axis=1), data['Stage'] y = y.apply(int) scaler = StandardScaler() X_scaled = scaler.fit_transform(X) return X_scaled, y # + papermill={"duration": 0.133556, "end_time": "2021-09-15T13:12:33.808021", "exception": false, "start_time": "2021-09-15T13:12:33.674465", "status": "completed"} id="0db1973e" def pipeline(outliers_treatment=None, drop_id=False, floating_cat_features_encoding=False): X_scaled, y = data_prep(outliers_treatment=outliers_treatment, drop_id=drop_id, 
floating_cat_features_encoding=floating_cat_features_encoding) print('X shape: ', X_scaled.shape, ' y shape: ', y.shape) X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, test_size=0.33, random_state=17) # lr lr = LogisticRegression(random_state=17) lr.fit(X_train, y_train) # ---------------------------------- # lr_gcv lr_params = {'solver': ['sag', 'saga', 'liblinear'], 'C': np.logspace(-3, 1, 5), 'penalty': ['l1', 'l2', 'elasticnet'], 'class_weight': ['balanced', None] } lr_gcv = GridSearchCV(estimator=LogisticRegression(random_state=17), param_grid=lr_params, scoring='roc_auc_ovo', cv=5, verbose=True, n_jobs=-1) lr_gcv.fit(X_train, y_train) best_lr = lr_gcv.best_estimator_ # ----------------------------- # knn knn = KNeighborsClassifier() knn.fit(X_train, y_train) # -------------------------- # knn_gcv knn_params = {'n_neighbors': range(2, 8), 'weights': ['uniform', 'distance'], 'algorithm': ['ball_tree', 'kd_tree'], 'p': range(1, 5)} knn_gcv = GridSearchCV(estimator=KNeighborsClassifier(), param_grid=knn_params, scoring='roc_auc_ovo', cv=5, verbose=True, n_jobs=-1) knn_gcv.fit(X_train, y_train) best_knn = knn_gcv.best_estimator_ # -------------------- # svm svm = SVC(probability=True, random_state=17) svm.fit(X_train, y_train) # -------------------- # svm_gcv svm_params = {'C': np.logspace(-4, 1, 6), 'kernel': ['linear', 'poly', 'rbf', 'sigmoid'], 'class_weight': ['balanced', None]} svm_gcv = GridSearchCV(estimator=SVC(random_state=17, probability=True), param_grid=svm_params, scoring='roc_auc_ovo', cv=5, verbose=True, n_jobs=-1) svm_gcv.fit(X_train, y_train) best_svm = svm_gcv.best_estimator_ # ----------------------- # rf rf = RandomForestClassifier(random_state=17) rf.fit(X_train, y_train) # ------------------------------------ # rf_gcv rf_params = {'n_estimators': range(20, 120, 20), 'criterion': ['gini', 'entropy'], 'min_samples_split': range(2, 8, 2), 'min_samples_leaf': range(1, 5), 'class_weight': ['balanced', 'balanced_subsample', None]} rf_gcv = GridSearchCV(estimator=RandomForestClassifier(random_state=17), param_grid=rf_params, scoring='roc_auc_ovo', cv=5, verbose=True, n_jobs=-1) rf_gcv.fit(X_train, y_train) best_rf = rf_gcv.best_estimator_ # ---------------------------- # gb gb = GradientBoostingClassifier(random_state=17) gb.fit(X_train, y_train) # ---------------------------- # gb_gcv gb_params = {'loss': ['deviance', 'exponential'], 'learning_rate': np.logspace(-4, 1, 4), 'n_estimators': range(60, 160, 20), 'min_samples_split': range(2, 8, 2), 'min_samples_leaf': range(1, 5), 'max_features': ['sqrt', 'log2']} gb_gcv = GridSearchCV(estimator=GradientBoostingClassifier(random_state=17), param_grid=gb_params, scoring='roc_auc_ovo', cv=5, verbose=True, n_jobs=-1) gb_gcv.fit(X_train, y_train) best_gb = gb_gcv.best_estimator_ lr = {'LogReg': lr} best_lr = {'LogReg_CV': best_lr} knn = {'KNN': knn} best_knn = {'KNN-CV': best_knn} svm = {'SVM': svm} best_svm = {'SVM-CV': best_svm} rf = {'RandomForest': rf} best_rf = {'RandomForest_CV': best_rf} gb = {'GradBoost': gb} best_gb = {'GradBoost_CV': best_gb} models = {**lr, **best_lr, **knn, **best_knn, **svm, **best_svm, **rf, **best_rf, **gb, **best_gb} print('making perform...') model_performance = get_models_performance(models=models, X_train=X_train, y_train=y_train, X_test=X_test, y_test=y_test) print('Perform ready') return model_performance, models # + papermill={"duration": 0.120774, "end_time": "2021-09-15T13:12:34.040422", "exception": false, "start_time": "2021-09-15T13:12:33.919648", "status": "completed"} 
id="884bc4e2" outputId="e3ff029a-0eab-4001-f75a-2273d3ed18e1" outliers = ['log', 'IQR', None] drop_id = [False, True] fcfe = [False, True] combinations = list(product(outliers, drop_id, fcfe)) combinations # + [markdown] papermill={"duration": 0.112519, "end_time": "2021-09-15T13:12:34.264952", "exception": false, "start_time": "2021-09-15T13:12:34.152433", "status": "completed"} id="00df4d1b" # 1. We ll test any possible combinations of preprocess steps and compare models results. Then will choose best approach of data prep # + papermill={"duration": 5329.749206, "end_time": "2021-09-15T14:41:24.125927", "exception": false, "start_time": "2021-09-15T13:12:34.376721", "status": "completed"} id="909bfcbd" outputId="de021ac5-4b2d-4e8f-859f-eacfe6bee305" perf_n_models = [] for combo in tqdm(combinations): print(combo) perf_n_models.append(pipeline(*combo)) # + [markdown] papermill={"duration": 0.185048, "end_time": "2021-09-15T14:41:24.497048", "exception": false, "start_time": "2021-09-15T14:41:24.312000", "status": "completed"} id="925af980" # ## Finding Optimal Approach # + [markdown] papermill={"duration": 0.18939, "end_time": "2021-09-15T14:41:24.875680", "exception": false, "start_time": "2021-09-15T14:41:24.686290", "status": "completed"} id="4ff21ca5" # Let`s look at descriptive statistics # + papermill={"duration": 0.205313, "end_time": "2021-09-15T14:41:25.267744", "exception": false, "start_time": "2021-09-15T14:41:25.062431", "status": "completed"} id="7c93dc79" outputId="bc25acf1-ebb5-4b43-92c5-21d581881b9f" # ('log', False, False) perf_n_models[0][0] # + papermill={"duration": 0.210469, "end_time": "2021-09-15T14:41:25.663946", "exception": false, "start_time": "2021-09-15T14:41:25.453477", "status": "completed"} id="b63d0b56" outputId="31849ea8-7a87-4610-8085-9f5b7a4a5b2d" perf_n_models[0][0].describe() # + papermill={"duration": 0.200494, "end_time": "2021-09-15T14:41:26.049382", "exception": false, "start_time": "2021-09-15T14:41:25.848888", "status": "completed"} id="1af1f3cb" outputId="5c59f6f6-e933-413e-eb87-8f36f3eac3fa" # ('log', False, True), perf_n_models[1][0] # + papermill={"duration": 0.209711, "end_time": "2021-09-15T14:41:26.446051", "exception": false, "start_time": "2021-09-15T14:41:26.236340", "status": "completed"} id="4258185c" outputId="3ad05368-1891-4191-80ea-42bb5b651a84" perf_n_models[1][0].describe() # + papermill={"duration": 0.200503, "end_time": "2021-09-15T14:41:26.831867", "exception": false, "start_time": "2021-09-15T14:41:26.631364", "status": "completed"} id="72c72941" outputId="1967064e-1f7a-4ec9-ba34-58370bdfaf3b" # ('log', True, False), perf_n_models[2][0] # + papermill={"duration": 0.213387, "end_time": "2021-09-15T14:41:27.234207", "exception": false, "start_time": "2021-09-15T14:41:27.020820", "status": "completed"} id="691f0b74" outputId="134901cc-0909-4888-9ad3-5d553aac2adb" perf_n_models[2][0].describe() # + papermill={"duration": 0.204212, "end_time": "2021-09-15T14:41:27.624946", "exception": false, "start_time": "2021-09-15T14:41:27.420734", "status": "completed"} id="044ddbff" outputId="59c84c77-4f56-48a5-f85f-da4c741d4c07" # ('log', True, True), perf_n_models[3][0] # + papermill={"duration": 0.212405, "end_time": "2021-09-15T14:41:28.024361", "exception": false, "start_time": "2021-09-15T14:41:27.811956", "status": "completed"} id="c750ca1b" outputId="9eb4be54-0731-4f31-cf38-24cc86d74133" perf_n_models[3][0].describe() # + papermill={"duration": 0.203337, "end_time": "2021-09-15T14:41:28.416160", "exception": false, "start_time": 
"2021-09-15T14:41:28.212823", "status": "completed"} id="af6b4d14" outputId="19f13311-73b0-4fb5-c106-3b6870001beb" #('IQR', False, False), perf_n_models[4][0] # + papermill={"duration": 0.228718, "end_time": "2021-09-15T14:41:28.853027", "exception": false, "start_time": "2021-09-15T14:41:28.624309", "status": "completed"} id="e545045f" outputId="f4efbf89-ad25-4a13-c006-a93bf5ad271f" perf_n_models[4][0].describe() # + papermill={"duration": 0.205786, "end_time": "2021-09-15T14:41:29.246036", "exception": false, "start_time": "2021-09-15T14:41:29.040250", "status": "completed"} id="69e35cb8" outputId="0f7eedbd-b35b-40e6-eff8-44c050eca029" #('IQR', False, True), perf_n_models[5][0] # + papermill={"duration": 0.214826, "end_time": "2021-09-15T14:41:29.648966", "exception": false, "start_time": "2021-09-15T14:41:29.434140", "status": "completed"} id="d2824120" outputId="3de01261-20e4-46f4-f1de-4ddcfd1cfabe" perf_n_models[5][0].describe() # + papermill={"duration": 0.204227, "end_time": "2021-09-15T14:41:30.042183", "exception": false, "start_time": "2021-09-15T14:41:29.837956", "status": "completed"} id="6232d0b4" outputId="a3300aa8-f659-42b6-9fcb-b9bc0c8cdfd8" #('IQR', True, False), perf_n_models[6][0] # + papermill={"duration": 0.212986, "end_time": "2021-09-15T14:41:30.446369", "exception": false, "start_time": "2021-09-15T14:41:30.233383", "status": "completed"} id="f4056f56" outputId="c953ef00-24bb-4367-95ed-df4bfb1beb36" perf_n_models[6][0].describe() # + papermill={"duration": 0.227447, "end_time": "2021-09-15T14:41:30.863199", "exception": false, "start_time": "2021-09-15T14:41:30.635752", "status": "completed"} id="6135959c" outputId="56d92e4a-fe22-4a59-f61a-caca9fc946c0" #('IQR', True, True), perf_n_models[7][0] # + papermill={"duration": 0.24056, "end_time": "2021-09-15T14:41:31.321317", "exception": false, "start_time": "2021-09-15T14:41:31.080757", "status": "completed"} id="86a6ea64" outputId="3c67f326-65bc-4396-e2e4-d43cb60b657b" perf_n_models[7][0].describe() # + papermill={"duration": 0.211059, "end_time": "2021-09-15T14:41:31.733673", "exception": false, "start_time": "2021-09-15T14:41:31.522614", "status": "completed"} id="56ad943f" outputId="b50ee211-bf03-47a0-fb21-87896d9867da" #(None, False, False), perf_n_models[8][0] # + papermill={"duration": 0.224522, "end_time": "2021-09-15T14:41:32.157320", "exception": false, "start_time": "2021-09-15T14:41:31.932798", "status": "completed"} id="ca570bfd" outputId="c43a8aa2-7d5a-439f-a0b4-7b0586345d8a" perf_n_models[8][0].describe() # + papermill={"duration": 0.208526, "end_time": "2021-09-15T14:41:32.558468", "exception": false, "start_time": "2021-09-15T14:41:32.349942", "status": "completed"} id="4f035b9c" outputId="2a0c4844-1a4f-43ab-ddf7-b91c519a6847" #(None, False, True), perf_n_models[9][0] # + papermill={"duration": 0.223188, "end_time": "2021-09-15T14:41:32.979485", "exception": false, "start_time": "2021-09-15T14:41:32.756297", "status": "completed"} id="a566bbfe" outputId="0cbcdac9-5696-4a90-d4e1-999bfe2ca4bd" perf_n_models[9][0].describe() # + papermill={"duration": 0.207982, "end_time": "2021-09-15T14:41:33.384718", "exception": false, "start_time": "2021-09-15T14:41:33.176736", "status": "completed"} id="6554e2cf" outputId="cc48df28-0f67-43c3-faa3-c3ae54a3ba3c" #(None, True, False), perf_n_models[10][0] # + papermill={"duration": 0.222559, "end_time": "2021-09-15T14:41:33.806891", "exception": false, "start_time": "2021-09-15T14:41:33.584332", "status": "completed"} id="a9a6afb2" 
outputId="452d3737-42d3-4d3c-d2d8-f218245fb7e4" perf_n_models[10][0].describe() # + papermill={"duration": 0.21355, "end_time": "2021-09-15T14:41:34.224160", "exception": false, "start_time": "2021-09-15T14:41:34.010610", "status": "completed"} id="b5ecea68" outputId="f84f71df-05e3-494f-b302-ab3d349a4404" #(None, True, True)] perf_n_models[11][0] # + papermill={"duration": 0.222283, "end_time": "2021-09-15T14:41:34.648344", "exception": false, "start_time": "2021-09-15T14:41:34.426061", "status": "completed"} id="c3aa1767" outputId="9b954ad0-eda7-4814-c859-978bd9c9438f" perf_n_models[11][0].describe() # + papermill={"duration": 0.204644, "end_time": "2021-09-15T14:41:35.049058", "exception": false, "start_time": "2021-09-15T14:41:34.844414", "status": "completed"} id="6f805487" # ('log', False, True) mean: 0.700372, 0.730750; min: 0.611770, 0.629183; max: 0.748286, 0.779441 # (None, True, True) mean: 0.702960, 0.733629; min: 0.650990, 0.662221; max: 0.762657, 0.781433 # + [markdown] papermill={"duration": 0.200241, "end_time": "2021-09-15T14:41:35.452722", "exception": false, "start_time": "2021-09-15T14:41:35.252481", "status": "completed"} id="471b5390" # **So now we will try to increase models performance of two different approaches:** # 1. Log features transformation + with ID column + discrete encoding for categorical features # 2. No outliers treatment + whithout ID column + discrete encoding for categorical features # + [markdown] papermill={"duration": 0.197023, "end_time": "2021-09-15T14:41:35.849340", "exception": false, "start_time": "2021-09-15T14:41:35.652317", "status": "completed"} id="fabaf6b4" # # Building Ensemble Classifiers # + [markdown] papermill={"duration": 0.203243, "end_time": "2021-09-15T14:41:36.251551", "exception": false, "start_time": "2021-09-15T14:41:36.048308", "status": "completed"} id="e92c1292" # ## Blending # + [markdown] papermill={"duration": 0.201602, "end_time": "2021-09-15T14:41:36.654255", "exception": false, "start_time": "2021-09-15T14:41:36.452653", "status": "completed"} id="01bc23d8" # * Stacking-type ensemble where the meta-model is trained on predictions made on a holdout dataset. 
# + papermill={"duration": 0.204585, "end_time": "2021-09-15T14:41:37.060417", "exception": false, "start_time": "2021-09-15T14:41:36.855832", "status": "completed"} id="e871c206" def df_for_blend(models, models_name, tr_X, te_X, tr_y, te_y): tr_blend_df = pd.DataFrame(columns=models_name) te_blend_df = pd.DataFrame(columns=models_name) for model, name in zip(models, models_name): model.fit(tr_X, tr_y) train_pred = model.predict(tr_X) test_pred = model.predict(te_X) tr_blend_df[name] = train_pred te_blend_df[name] = test_pred return tr_blend_df, te_blend_df def blend_clf(tr_blend_df, te_blend_df, tr_y, te_y): lr = LogisticRegression(solver='liblinear', random_state=17) lr.fit(tr_blend_df, tr_y) return lr # + [markdown] papermill={"duration": 0.197273, "end_time": "2021-09-15T14:41:37.454682", "exception": false, "start_time": "2021-09-15T14:41:37.257409", "status": "completed"} id="ab517cae" # ### first approach # + papermill={"duration": 0.21, "end_time": "2021-09-15T14:41:37.860568", "exception": false, "start_time": "2021-09-15T14:41:37.650568", "status": "completed"} id="a902a66d" outputId="64956fda-f65d-466d-b5fd-6b7d41755b09" first_approach = perf_n_models[1][0].copy() # ('log', False, True) first_approach # + papermill={"duration": 0.272501, "end_time": "2021-09-15T14:41:38.328058", "exception": false, "start_time": "2021-09-15T14:41:38.055557", "status": "completed"} id="3096376b" X1_scaled, y1 = data_prep('log', False, True) X1_train, X1_test, y1_train, y1_test = train_test_split(X1_scaled, y1, test_size=0.33, random_state=17) # + papermill={"duration": 0.987846, "end_time": "2021-09-15T14:41:39.541330", "exception": false, "start_time": "2021-09-15T14:41:38.553484", "status": "completed"} id="bf8d1a66" outputId="e1a1c2b5-de36-4e3e-de86-27e698dd7166" models = [perf_n_models[1][1]['LogReg_CV'], perf_n_models[1][1]['KNN-CV'], perf_n_models[1][1]['SVM'], perf_n_models[1][1]['RandomForest'], perf_n_models[1][1]['GradBoost_CV']] model_names = ['LogReg_CV', 'KNN-CV', 'SVM', 'RandomForest', 'GradBoost_CV'] tr_blend_df, te_blend_df = df_for_blend(models=models, models_name=model_names, tr_X=X1_train, te_X=X1_test, tr_y=y1_train, te_y=y1_test) tr_blend_df # + [markdown] papermill={"duration": 0.195339, "end_time": "2021-09-15T14:41:39.966323", "exception": false, "start_time": "2021-09-15T14:41:39.770984", "status": "completed"} id="7bea0e18" # * Are models correlated? # + papermill={"duration": 0.468163, "end_time": "2021-09-15T14:41:40.631464", "exception": false, "start_time": "2021-09-15T14:41:40.163301", "status": "completed"} id="fbfd19d6" outputId="83d99c4f-d14b-47b5-c5df-f4c5ac4e821d" plt.figure(figsize=(12, 8)) sns.heatmap(tr_blend_df.corr()); # + [markdown] papermill={"duration": 0.197267, "end_time": "2021-09-15T14:41:41.024228", "exception": false, "start_time": "2021-09-15T14:41:40.826961", "status": "completed"} id="68c8e4f7" # Gradient booosting has 1.0 corr with each model. So lets delete it. 
# + papermill={"duration": 0.204735, "end_time": "2021-09-15T14:41:41.428361", "exception": false, "start_time": "2021-09-15T14:41:41.223626", "status": "completed"} id="79a28a51" tr_blend_df, te_blend_df = tr_blend_df.drop('GradBoost_CV', axis=1), te_blend_df.drop('GradBoost_CV', axis=1) # + papermill={"duration": 0.239165, "end_time": "2021-09-15T14:41:41.863791", "exception": false, "start_time": "2021-09-15T14:41:41.624626", "status": "completed"} id="8da187f7" outputId="b697bc30-d2d0-4457-f054-a26d7a31dfff" blend = blend_clf(tr_blend_df, te_blend_df, y1_train, y1_test) first_approach = add_performance_to_df(perf_n_models[1][0], 'BlendingClassifier', blend, tr_blend_df, y1_train, te_blend_df, y1_test) first_approach # + [markdown] papermill={"duration": 0.196964, "end_time": "2021-09-15T14:41:42.258922", "exception": false, "start_time": "2021-09-15T14:41:42.061958", "status": "completed"} id="5e1c9610" # **As we can see Blending does not improve score** # + [markdown] papermill={"duration": 0.196566, "end_time": "2021-09-15T14:41:42.650509", "exception": false, "start_time": "2021-09-15T14:41:42.453943", "status": "completed"} id="2fdb1856" # ### second approach # + papermill={"duration": 0.212793, "end_time": "2021-09-15T14:41:43.058542", "exception": false, "start_time": "2021-09-15T14:41:42.845749", "status": "completed"} id="aded0631" outputId="2fad288d-f3d0-4800-df5a-936a1d58e1e2" second_approach = perf_n_models[11][0] #(None, True, True)] second_approach # + papermill={"duration": 0.263147, "end_time": "2021-09-15T14:41:43.520785", "exception": false, "start_time": "2021-09-15T14:41:43.257638", "status": "completed"} id="931b944e" X2_scaled, y2 = data_prep(None, True, True) X2_train, X2_test, y2_train, y2_test = train_test_split(X2_scaled, y2, test_size=0.33, random_state=17) # + papermill={"duration": 1.045551, "end_time": "2021-09-15T14:41:44.800899", "exception": false, "start_time": "2021-09-15T14:41:43.755348", "status": "completed"} id="209531e3" outputId="99222dc5-2e73-4341-d0ce-c3b64f489bfc" models = [perf_n_models[11][1]['LogReg'], perf_n_models[11][1]['KNN-CV'], perf_n_models[11][1]['SVM'], perf_n_models[11][1]['RandomForest_CV'], perf_n_models[11][1]['GradBoost_CV']] model_names = ['LogReg', 'KNN-CV', 'SVM', 'RandomForest_CV', 'GradBoost_CV'] tr_blend_df, te_blend_df = df_for_blend(models=models, models_name=model_names, tr_X=X2_train, te_X=X2_test, tr_y=y2_train, te_y=y2_test) tr_blend_df # + papermill={"duration": 0.458032, "end_time": "2021-09-15T14:41:45.456767", "exception": false, "start_time": "2021-09-15T14:41:44.998735", "status": "completed"} id="a550a1eb" outputId="d20b1cb6-c7a7-400f-d35a-6674f3833120" plt.figure(figsize=(12, 8)) sns.heatmap(tr_blend_df.corr()); # + [markdown] papermill={"duration": 0.197006, "end_time": "2021-09-15T14:41:45.854402", "exception": false, "start_time": "2021-09-15T14:41:45.657396", "status": "completed"} id="3ea3736a" # **In second approach gradboosting also highly corr** # + papermill={"duration": 0.210718, "end_time": "2021-09-15T14:41:46.262363", "exception": false, "start_time": "2021-09-15T14:41:46.051645", "status": "completed"} id="9ab06796" tr_blend_df, te_blend_df = tr_blend_df.drop('GradBoost_CV', axis=1), te_blend_df.drop('GradBoost_CV', axis=1) # + papermill={"duration": 0.241111, "end_time": "2021-09-15T14:41:46.699877", "exception": false, "start_time": "2021-09-15T14:41:46.458766", "status": "completed"} id="cdac5ddb" outputId="b77552ae-6cc1-4756-bd7b-3d0760442727" blend = blend_clf(tr_blend_df, te_blend_df, 
y2_train, y2_test) second_approach = add_performance_to_df(second_approach, 'BlendingClassifier', blend, tr_blend_df, y2_train, te_blend_df, y2_test) second_approach # + [markdown] papermill={"duration": 0.199526, "end_time": "2021-09-15T14:41:47.096574", "exception": false, "start_time": "2021-09-15T14:41:46.897048", "status": "completed"} id="a97801fe" # **Here also blending performs very bad** # + [markdown] papermill={"duration": 0.197657, "end_time": "2021-09-15T14:41:47.495070", "exception": false, "start_time": "2021-09-15T14:41:47.297413", "status": "completed"} id="8cb0ea56" # ## Stacking # + [markdown] papermill={"duration": 0.20167, "end_time": "2021-09-15T14:41:47.894469", "exception": false, "start_time": "2021-09-15T14:41:47.692799", "status": "completed"} id="3b753df4" # * Stacking-type ensemble where the meta-model is trained on out-of-fold predictions made during k-fold cross-validation # + papermill={"duration": 0.207601, "end_time": "2021-09-15T14:41:48.303965", "exception": false, "start_time": "2021-09-15T14:41:48.096364", "status": "completed"} id="9acb13b3" def stacking_clf(models, model_names, tr_X, tr_y): level0 = list() for model, name in zip(models, model_names): level0.append((name, model)) level1 = LogisticRegression(C=0.001, class_weight='balanced', random_state=17, solver='liblinear') stcl = StackingClassifier(estimators=level0, final_estimator=level1, cv=5, stack_method='predict_proba', passthrough=True, n_jobs=-1) stcl.fit(tr_X, tr_y) return stcl # + [markdown] papermill={"duration": 0.195714, "end_time": "2021-09-15T14:41:48.696037", "exception": false, "start_time": "2021-09-15T14:41:48.500323", "status": "completed"} id="e23fd9c1" # ### First approach # + papermill={"duration": 5.455536, "end_time": "2021-09-15T14:41:54.349370", "exception": false, "start_time": "2021-09-15T14:41:48.893834", "status": "completed"} id="ad35c3ad" outputId="06ec1f46-0de2-4049-ae1d-e5a35d7eab59" model_names, models = list(perf_n_models[1][1].keys()), list(perf_n_models[1][1].values()), stacking = stacking_clf(models, model_names, X1_train, y1_train) first_approach = add_performance_to_df(first_approach, 'StackingClassifier', stacking, X1_train, y1_train, X1_test, y1_test) first_approach # + [markdown] papermill={"duration": 0.199244, "end_time": "2021-09-15T14:41:54.746535", "exception": false, "start_time": "2021-09-15T14:41:54.547291", "status": "completed"} id="920ae215" # **Again stacking does not change a thing** # + [markdown] papermill={"duration": 0.200895, "end_time": "2021-09-15T14:41:55.147423", "exception": false, "start_time": "2021-09-15T14:41:54.946528", "status": "completed"} id="425101ee" # ### Second Approach # + papermill={"duration": 5.333748, "end_time": "2021-09-15T14:42:00.680737", "exception": false, "start_time": "2021-09-15T14:41:55.346989", "status": "completed"} id="be887a67" outputId="3fba2924-3cb0-419f-c1e6-bc3c3eee21b5" model_names, models = list(perf_n_models[11][1].keys()), list(perf_n_models[11][1].values()), stacking = stacking_clf(models, model_names, X2_train, y2_train) second_approach = add_performance_to_df(second_approach, 'StackingClassifier', stacking, X2_train, y2_train, X2_test, y2_test) second_approach # + [markdown] papermill={"duration": 0.199378, "end_time": "2021-09-15T14:42:01.081383", "exception": false, "start_time": "2021-09-15T14:42:00.882005", "status": "completed"} id="1af89780" # **Here stacking shows mid score** # + [markdown] papermill={"duration": 0.198466, "end_time": "2021-09-15T14:42:01.482727", "exception": 
false, "start_time": "2021-09-15T14:42:01.284261", "status": "completed"} id="d2a96848" # ## Bagging # + [markdown] papermill={"duration": 0.198797, "end_time": "2021-09-15T14:42:01.915327", "exception": false, "start_time": "2021-09-15T14:42:01.716530", "status": "completed"} id="91f467dc" # ### First approach # + papermill={"duration": 5.094621, "end_time": "2021-09-15T14:42:07.210569", "exception": false, "start_time": "2021-09-15T14:42:02.115948", "status": "completed"} id="40d36f7a" outputId="fcc3dde9-4bca-4a4b-9e12-45a5a9e29ee5" bagg = BaggingClassifier(base_estimator=perf_n_models[1][1]['GradBoost_CV'], random_state=17, bootstrap=True) bagg.fit(X1_train, y1_train) first_approach = add_performance_to_df(first_approach, 'BaggingClassifier(GradBoost)', bagg, X1_train, y1_train, X1_test, y1_test) first_approach # + [markdown] papermill={"duration": 0.198254, "end_time": "2021-09-15T14:42:07.609697", "exception": false, "start_time": "2021-09-15T14:42:07.411443", "status": "completed"} id="bd2bde74" # **Bagging became as good as a simple gradient boosting** # + [markdown] papermill={"duration": 0.200893, "end_time": "2021-09-15T14:42:08.009051", "exception": false, "start_time": "2021-09-15T14:42:07.808158", "status": "completed"} id="22c4339c" # ### Second approach # + papermill={"duration": 2.185519, "end_time": "2021-09-15T14:42:10.396162", "exception": false, "start_time": "2021-09-15T14:42:08.210643", "status": "completed"} id="abe7094b" outputId="86e5fedb-6993-409f-f190-d0e9fab12c7a" bagg = BaggingClassifier(base_estimator=perf_n_models[11][1]['RandomForest_CV'], random_state=17, bootstrap=True) bagg.fit(X2_train, y2_train) second_approach = add_performance_to_df(second_approach, 'BaggingClassifier(RandomForest_CV)', bagg, X2_train, y2_train, X2_test, y2_test) second_approach # + [markdown] papermill={"duration": 0.199132, "end_time": "2021-09-15T14:42:10.797677", "exception": false, "start_time": "2021-09-15T14:42:10.598545", "status": "completed"} id="20775582" # **So none of ensemble model helped to increase score** # + [markdown] papermill={"duration": 0.201171, "end_time": "2021-09-15T14:42:11.202092", "exception": false, "start_time": "2021-09-15T14:42:11.000921", "status": "completed"} id="dbe0170a" # ### So the best score has RF_CV without any tratment of outliers, whithout ID column and with discrete encoding for categorical features # + [markdown] papermill={"duration": 0.204196, "end_time": "2021-09-15T14:42:11.607710", "exception": false, "start_time": "2021-09-15T14:42:11.403514", "status": "completed"} id="d0aa70c7" # # Feature importance # + papermill={"duration": 0.208029, "end_time": "2021-09-15T14:42:12.015927", "exception": false, "start_time": "2021-09-15T14:42:11.807898", "status": "completed"} id="ac451d74" best_first_ap_model = perf_n_models[1][1]['GradBoost_CV'] the_best_second_ap_model = perf_n_models[11][1]['RandomForest_CV'] # + papermill={"duration": 0.603031, "end_time": "2021-09-15T14:42:12.820145", "exception": false, "start_time": "2021-09-15T14:42:12.217114", "status": "completed"} id="f84c188a" outputId="b29f00d3-d213-4813-c1a0-ba7615d2da5e" plt.figure(figsize=(12, 8)) rf_indices = np.argsort(the_best_second_ap_model.feature_importances_)[::-1] ax2 = sns.barplot(y=df.drop('ID', axis=1).columns[rf_indices], x = best_first_ap_model.feature_importances_[rf_indices] , orient='h') ax2.set_xlabel('Importance', fontsize=12) ax2.set_ylabel('Features', fontsize=12) ax2.set_title('Second approach RF-CV (The best!)'); # + papermill={"duration": 0.519522, 
"end_time": "2021-09-15T14:42:13.545730", "exception": false, "start_time": "2021-09-15T14:42:13.026208", "status": "completed"} id="3b92f5cf" outputId="a3726ef8-3a47-4e00-e068-b9bd75853d27" plt.figure(figsize=(12, 8)) gb_indices = np.argsort(best_first_ap_model.feature_importances_)[::-1] ax1 = sns.barplot(y=df.columns[gb_indices], x = best_first_ap_model.feature_importances_[gb_indices] , orient='h') ax1.set_xlabel('Importance', fontsize=12) ax1.set_ylabel('Features', fontsize=12) ax1.set_title('First approach GB-CV (Not the best)') # + papermill={"duration": 0.203646, "end_time": "2021-09-15T14:42:13.953325", "exception": false, "start_time": "2021-09-15T14:42:13.749679", "status": "completed"} id="c7dc40cf"
52,703
/Untitled.ipynb
51dc2e2a9175ad1d43ee9fedd5528d000d6f5e55
[]
no_license
chedellasubbu/assignment
https://github.com/chedellasubbu/assignment
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
15,104
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib notebook import matplotlib import matplotlib.pyplot as plt import pandas as pd import numpy as np housing_df = pd.read_csv("Resources/DC_Properties.csv") housing_df2 = housing_df.drop(["Unnamed: 0", "BLDG_NUM", "NATIONALGRID", "X", "Y"], axis=1) housing_df2['YEAR'] = pd.DatetimeIndex(housing_df2['SALEDATE']).year housing_df2['MONTH'] = pd.DatetimeIndex(housing_df2['SALEDATE']).month hdf3 = housing_df2.loc[housing_df2['SOURCE'] == 'Residential', : ] hdf4 = hdf3.dropna(subset=['PRICE']) hdf5 = hdf4.loc[hdf4['YEAR']> 1991, : ] hdf6 = hdf5.loc[hdf5['YEAR']< 2018, : ] #hdf6.to_csv('HousingDataTrim.csv') housing = hdf6 housing.columns housing_df.dtypes.head(40) salecount = housing.groupby('WARD')['SALEDATE'].count() salecount # + wardd1 = housing.loc[housing['WARD'] == 'Ward 1', ['WARD','SALEDATE','YEAR']] ward1 = wardd1.groupby('YEAR')['SALEDATE'].count() wardd2 = housing.loc[housing['WARD'] == 'Ward 2', ['WARD','SALEDATE','YEAR']] ward2 = wardd2.groupby('YEAR')['SALEDATE'].count() wardd3 = housing.loc[housing['WARD'] == 'Ward 3', ['WARD','SALEDATE','YEAR']] ward3 = wardd3.groupby('YEAR')['SALEDATE'].count() wardd4 = housing.loc[housing['WARD'] == 'Ward 4', ['WARD','SALEDATE','YEAR']] ward4 = wardd4.groupby('YEAR')['SALEDATE'].count() wardd5 = housing.loc[housing['WARD'] == 'Ward 5', ['WARD','SALEDATE','YEAR']] ward5 = wardd5.groupby('YEAR')['SALEDATE'].count() wardd6 = housing.loc[housing['WARD'] == 'Ward 6', ['WARD','SALEDATE','YEAR']] ward6 = wardd6.groupby('YEAR')['SALEDATE'].count() wardd7 = housing.loc[housing['WARD'] == 'Ward 7', ['WARD','SALEDATE','YEAR']] ward7 = wardd7.groupby('YEAR')['SALEDATE'].count() wardd8 = housing.loc[housing['WARD'] == 'Ward 8', ['WARD','SALEDATE','YEAR']] ward8 = wardd8.groupby('YEAR')['SALEDATE'].count() # - apw = housing.groupby('WARD')['PRICE'].mean().round(2) apw # + bar_chart = apw.plot(kind="bar", title='Average Sales Price by Ward') plt.grid() plt.show() # + #plt.clf() 
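# +
# The eight per-ward groupbys above can be collapsed into a single pivot. This is an
# illustrative sketch on the same `housing` frame, not part of the original analysis:
sales_by_ward_year = housing.pivot_table(index='YEAR', columns='WARD',
                                         values='SALEDATE', aggfunc='count')
sales_by_ward_year.plot(figsize=(15, 8), title='Sales per Year by Ward')
plt.ylabel('Number of Sales')
plt.xlabel('YEAR')
plt.show()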
# + colab={"base_uri": "https://localhost:8080/", "height": 72} outputId="93e871b9-532e-4b3b-b5a8-f88d5d5c7c30"
from
google.colab import files import pandas as pd data_to_load = files.upload() # + id="fVW9i7UZ0pSm" colab_type="code" colab={} import numpy as np import matplotlib.pyplot as plt import seaborn as sns # + id="eOjQ4FdP07Qw" colab_type="code" colab={} import io df = pd.read_csv(io.BytesIO(data_to_load['kc_house_data.csv'])) # + [markdown] id="Nk68Ba8G23y9" colab_type="text" # ##### Exploratory Data Analysis # + id="WByuPTBP20Lr" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 391} outputId="a404c0c1-1955-42c7-8075-a41fb88fedcf" df.isnull().sum() # + id="QgnEGTzW20mR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 669} outputId="5240c325-7d41-44dc-ecee-3efcbfdad32b" df.describe().transpose() # + id="KS9WFgLP3SGk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 416} outputId="9459dd46-32be-4b8a-c0ff-7da71a2f042f" plt.figure(figsize=(10,6)) sns.distplot(df['price']) # + [markdown] id="9-LUx0FO3o0H" colab_type="text" # From the above distribution, the houses price falls from 0 to around 1.5 million # + id="SHRoAj5F3oED" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 296} outputId="345fb1b9-e4f9-4b24-e26e-cf89d1f83b02" sns.countplot(df['price']) # + id="IyvehhVO3SVY" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 374} outputId="11062c35-e7eb-40b5-d0bd-2d2a3f0386d5" corrmat = df.corr() numer_corr = corrmat['price'].abs().sort_values() numer_corr # + id="DSF8cwRO3SZ1" colab_type="code" colab={} str_cormat= numer_corr[numer_corr>0.4] # + id="vch6-czV3Sjp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 340} outputId="ff0697d6-bb52-48b2-893e-bfa23bbe3220" strong_corr = corrmat[str_cormat.index].corr() sns.heatmap(strong_corr) # + [markdown] id="anD2Oj_D52W3" colab_type="text" # Few features are playing important role like bathrooms,sqft_living,grade .The potential collinearity between some of these feature columns. Collinearity is when 2 feature columns are highly correlated and stand the risk of duplicating information. If we have 2 features that convey the same information using 2 different measures or metrics, we don't need to keep both. 
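# + [markdown]
# (A hedged aside, not part of the original notebook: instead of sorting and slicing, the same
# top-1% cut-off can be read directly from the price distribution — sketch below.)

# +
# Illustrative alternative: keep everything at or below the 99th-percentile price
price_99 = df['price'].quantile(0.99)
print('99th percentile price:', price_99)
non_top_1_perc_alt = df[df['price'] <= price_99]
print('Houses kept:', len(non_top_1_perc_alt))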
# + id="Ug-B1itU3Srv" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 405} outputId="9b803472-6baf-4a73-cac5-7b70f2d94db9" plt.figure(figsize=(10,6)) sns.scatterplot(x='price',y='sqft_living',data=df) # + [markdown] id="smSJVDM1-vpJ" colab_type="text" # #### Geographical Properties # # The price also varies depends upon the latitude and longitude # + id="Vo8HAYVh-scw" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 514} outputId="deaeb045-7c81-4eb8-d465-5869534dcef1" plt.figure(figsize=(12,8)) sns.scatterplot(x='price',y='long',data=df) # + id="WZ7BBZKN-4qb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 514} outputId="3b0d2b7d-f535-41be-f0e0-ace10b73630c" plt.figure(figsize=(12,8)) sns.scatterplot(x='price',y='long',data=df) # + id="0PvTAWNM_TjF" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 514} outputId="a8c94280-cf13-411d-fa57-4bda12965327" plt.figure(figsize=(12,8)) sns.scatterplot(x='long',y='lat',data=df,hue='price') # + id="JJrayYFS_y0g" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 689} outputId="eb30afe5-51a8-468c-bda0-a84c56699d67" df.sort_values('price',ascending=False).head(20) # + [markdown] id="0xeNaFmqnMgP" colab_type="text" # The above list is for the most expensive s. Around 1% of the homes are have high sale prices # + id="L7V8GH_vnLX8" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="4cbb78b1-5f16-4db6-f614-f5b47cee4d43" len(df)*(0.01) # + [markdown] id="WZkWx85foNsg" colab_type="text" # Nearly 216 houses are having high sale prices. # + id="blgypSEsp2FZ" colab_type="code" colab={} non_top_1_perc = df.sort_values('price',ascending=False).iloc[216:] # + id="RGsSvgNqoBPM" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 514} outputId="9b9c13ec-bf4e-4735-9b57-83ab0f234bf0" plt.figure(figsize=(12,8)) sns.scatterplot(x='long', y='lat',data=non_top_1_perc, hue='price',palette="RdYlGn",edgecolor=None,alpha=0.2) # + [markdown] id="CefynRiXqqZI" colab_type="text" # Seems waterfont properties are expensive . 
Let's visualize these houses with the help of using boxplot # + id="ImTUVV4X_3z-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 416} outputId="0542ce1a-95af-4e7e-bae6-a08983196897" plt.figure(figsize=(10,6)) sns.boxplot(x='waterfront',y='price',data=df) # + id="b0HwvtZUtP7f" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 224} outputId="d966c9d3-4685-42b5-cef5-ddb38ba7d333" df.head() # + id="7HM6ON5CtScz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 374} outputId="9d0bfbed-bcdd-4a67-ac37-42c9cf634d78" df.corr()['zipcode'].abs().sort_values() # + [markdown] id="HamW06K7zg1I" colab_type="text" # Feature Engineering # + id="OhP_T1jmzeCJ" colab_type="code" colab={} #changing date to datetime df['date']=pd.to_datetime(df['date']) # + id="v45J-__IzeI7" colab_type="code" colab={} df['month']=df['date'].apply(lambda date: date.month) # + id="KocoZQbQ0Ram" colab_type="code" colab={} df['year'] = df['date'].apply(lambda date:date.year) # + id="yEJOSbvhS9jy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 307} outputId="f8347a15-f35b-43f5-a610-b4cc3d9df4db" sns.boxplot(x='year',y='price',data=df) # + id="L7pN-OMdTQ7s" colab_type="code" colab={} df = df.drop('date',axis=1) # + id="jTOA9ekYTizS" colab_type="code" colab={} df = df.drop('zipcode',axis=1) # + id="lywQCbliTozG" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 102} outputId="ca71628e-c8c1-4515-87ed-e997440624fe" df.columns # + [markdown] id="dNv-TjQiekOO" colab_type="text" # #### Scaling and Train Test Split # + id="Qv75uf1dcfjV" colab_type="code" colab={} X= df.drop('price',axis=1).values y = df['price'].values # + id="AHf02MF2e1iZ" colab_type="code" colab={} from sklearn.model_selection import train_test_split # + id="6FErL5v3e1uM" colab_type="code" colab={} X_train, X_test, y_train, y_test=train_test_split(X, y, test_size=0.33, random_state=101) # + id="EQVKFUVWe18n" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="cfd15380-38f1-4caa-86e7-0a3638966e32" X_train.shape # + id="mn9blzwte1re" colab_type="code" colab={} from sklearn.preprocessing import MinMaxScaler # + [markdown] id="BiVaxr8aiRD7" colab_type="text" # Going to fit and transform by using fit_tranform() method on the train data so that we learn the parameters of scaling on the train data and in the same time we scale the train data. # + id="9DXHBgAae1oZ" colab_type="code" colab={} scaler= MinMaxScaler() X_train = scaler.fit_transform(X_train) # + [markdown] id="a52zEbp1iD0J" colab_type="text" # We only use transform() on the test data because we use the scaling paramaters learned on the train data to scale the test data. # We dont fit our test data set because we dont want to assume prior information about our test set. 
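# + [markdown]
# (Side note before the transform below — an illustration under assumptions rather than what this
# notebook does: wrapping the scaler and an estimator in a single sklearn `Pipeline` enforces the
# fit-on-train / transform-on-test discipline automatically. The `Ridge` regressor is only a
# placeholder; the notebook itself goes on to use a Keras network.)

# +
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import Ridge

leak_free = Pipeline([('scale', MinMaxScaler()),   # scaler parameters are learned from the train split only
                      ('model', Ridge())])         # placeholder estimator for illustration
# Usage on an unscaled split (names here are placeholders):
# leak_free.fit(features_train, target_train)        # fit scaler + model on training data
# predictions = leak_free.predict(features_test)     # test data is transformed with the train parameters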
# + id="XhmvGadZhf5Q" colab_type="code" colab={} X_test =scaler.transform(X_test) # + id="qD9dptTXjXf4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="5f98eb3b-f728-430c-e440-c9d71edad932" X_train.shape # + id="4AjOW9g2jXvw" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="e1fb7bbd-4684-452c-aaa9-52dfdc837126" X_test.shape # + id="jRx0qcPMjYA2" colab_type="code" colab={} from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Activation from tensorflow.keras.optimizers import Adam # + id="iyfPubmojX82" colab_type="code" colab={} model = Sequential() model.add(Dense(18,activation='relu')) model.add(Dense(18,activation='relu')) model.add(Dense(18,activation='relu')) model.add(Dense(18,activation='relu')) model.add(Dense(1)) model.compile(optimizer='adam',loss='mse') # + [markdown] id="Eb5GExp7ltPd" colab_type="text" # Training the Model # + id="79ANQhLsjXsF" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="acf76a3d-8f3b-41b0-8aab-c901022fbf72" model.fit(x=X_train,y=y_train.values, validation_data=(X_test,y_test.values), batch_size=128,epochs=400) # + id="stvgTRnUnAGV" colab_type="code" colab={} losses=pd.DataFrame(model.history.history) # + [markdown] id="iDjmRVvEpOXg" colab_type="text" # history returns a dictionary by conveting this dictionary into dataframe we can compare the loss and validation_loss data inorder to see the data is overfitting # + id="eL7q-63TnAWC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 293} outputId="22d783c4-8c30-4d7a-e185-3d99df14d4db" losses.plot() # + [markdown] id="dGrV9em1qmu2" colab_type="text" # By comparing the plot behaviour it decreasing in both the train loss and validation loss.This data is not overfitting,because of the data is going down and continue down together # + [markdown] id="n_wwPbWcrcr3" colab_type="text" # Evaluation on Test Data # + id="7w4eSmvEqlxu" colab_type="code" colab={} from sklearn.metrics import mean_squared_error,mean_absolute_error,explained_variance_score # + id="Q8YJoghfqXz7" colab_type="code" colab={} predictions= model.predict(X_test) # + id="zj8Csio4jXor" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="a6f59810-1625-4490-c1f6-baf1fc8a1c69" mean_absolute_error(y_test,predictions) # + id="6-u9HggktvGb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="ad217285-1a1b-4777-dc1d-278fccf9c8a8" np.sqrt(mean_squared_error(y_test,predictions)) # + id="-SgXEvcDtvXM" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="d9eaeffe-e03e-4053-c7d0-fd065f8a963a" df['price'].mean() # + [markdown] id="dsW5k0EMuDJl" colab_type="text" # From the MAE the price is off around 18% of the original price # + [markdown] id="ESTT9itfuVjx" colab_type="text" # Best possible score for explained variance is 1 # + id="IETEq9Qyt5kb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="fb512008-1164-4d08-c559-b744869e0915" explained_variance_score(y_test,predictions) # + id="qBOzZ6dDu8MY" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 306} outputId="01248a66-105b-4d76-d424-eab5d3112df0" # Our predictions plt.scatter(y_test,predictions) # Perfect predictions plt.plot(y_test,y_test,'r') # + id="Hl23_Sg_wPXI" colab_type="code" colab={}
20,152
/asl_recognizer.ipynb
3cd694879a9c29ceb4f731a924507d98802372d1
[]
no_license
mcwata7/AIND-Recognizer
https://github.com/mcwata7/AIND-Recognizer
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
244,449
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/OUCTheoryGroup/colab_demo/blob/master/02_Unsupervised_Segmentation.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="GlqL82RXiqOv" colab_type="text" # Unsupervised Image Segmentation. *ICASSP* 2018 # # **图片无监督语义分割**,作者是东京大学的 Asako Kanezaki ,这里采用的是曾伊言修改的代码。 # # GITHUB地址:https://github.com/Yonv1943/Unsupervised-Segmentation/tree/master # # 知乎链接:https://zhuanlan.zhihu.com/p/68528056 # # 原作者的算法要运行30秒左右,这里的代码只需要5秒钟就可以取得相同的效果。 # # + id="Am6_ZPctgeQ_" colab_type="code" colab={} # 首先:下载待处理的图像,这里选择的是 tiger.jpg 这张图 # ! wget https://raw.githubusercontent.com/Yonv1943/Unsupervised-Segmentation/master/image/tiger.jpg # + id="SlNovmExgw4_" colab_type="code" colab={} import os import time import cv2 import numpy as np from skimage import segmentation import torch import torch.nn as nn from matplotlib import pyplot as plt # + [markdown] id="H6Tq9iXnqzKS" colab_type="text" # 论文的总体框架如下: # # ![alt text](https://raw.githubusercontent.com/summitgao/ImageGallery/master/20191116182147.jpg) # # 完整算法如下: # # ![](https://raw.githubusercontent.com/summitgao/ImageGallery/master/20191116180641.jpg) # # 其中,$Net()$ 为作者使用的一个全卷积网络,接收输入图像进行特征提取,该网络由三层卷积组成,具体如下: # # | | kernel | dim | stride | padding | activation | # |:--:|:--:|:--:|:--:|:--:|:--:| # |conv2d| 3x3 | 100 | 1 | 1 | RelU, BatchNorm | # |conv2d| 3x3 | 100 | 1 | 1 | RelU, BatchNorm | # |conv2d| 1x1 | 100 | 1 | 1 | BatchNorm | # # 为了提高效率,曾伊言对网络进行了改进,使用四层卷积,仿照SENet ,使用3x3与1x1交替,膨胀64 与 压缩32。网络的实现代码如下: # # + id="4vPk0g5XgkjX" colab_type="code" colab={} class MyNet(nn.Module): def __init__(self, inp_dim, mod_dim1, mod_dim2): super(MyNet, self).__init__() self.seq = nn.Sequential( nn.Conv2d(inp_dim, mod_dim1, kernel_size=3, stride=1, padding=1), nn.BatchNorm2d(mod_dim1), nn.ReLU(inplace=True), nn.Conv2d(mod_dim1, mod_dim2, kernel_size=1, stride=1, padding=0), nn.BatchNorm2d(mod_dim2), nn.ReLU(inplace=True), nn.Conv2d(mod_dim2, mod_dim1, kernel_size=3, stride=1, padding=1), nn.BatchNorm2d(mod_dim1), nn.ReLU(inplace=True), nn.Conv2d(mod_dim1, mod_dim2, kernel_size=1, stride=1, padding=0), nn.BatchNorm2d(mod_dim2), ) def forward(self, x): return self.seq(x) # + [markdown] id="Ac3chp5Dx2-R" colab_type="text" # ## 1. 初始化参数 # # train_epoch 指定最大迭代 $2^6 = 64$ 个 epoch;inp_dim指输入图像是3通道; mod_dim1 和 mod_dim2 指网络为 64、32通道交替,因为是对原作者代码进行了修改,因此命名前加了 mod # + id="1fijMgtKzClf" colab_type="code" colab={} input_image_path = 'tiger.jpg' train_epoch = 2 ** 6 inp_dim = 3 mod_dim1 = 64 mod_dim2 = 32 gpu_id = 0 # if the label number small than it, break loop min_label_num = 4 # if the label number small than it, start to show result image. max_label_num = 256 start_time0 = time.time() torch.cuda.manual_seed_all(1943) np.random.seed(1943) os.environ['CUDA_VISIBLE_DEVICES'] = str(gpu_id) # choose GPU:0 image = cv2.imread(input_image_path) # + [markdown] id="taxA6TZFzb5K" colab_type="text" # ## 2. 
超像素分割 # # 这里使用了Efficient Graph-Based Image Segmentation - Felzenszwalb (MIT)2004 基于图的超像素分割算法 (简称Felz算法)。具体细节不过多介绍。对于超像素分割,有两个算法,一个是 Felz算法,另一个是 SLIC 算法。论文原作者使用的是 SLIC 算法,曾伊言推荐使用 Felz 算法替代 SLIC 算法。具体原因在知乎专栏里说的比较清楚,这里不再介绍。 # + id="YNtxKIxtzhrQ" colab_type="code" colab={} seg_map = segmentation.felzenszwalb(image, scale=32, sigma=0.5, min_size=64) plt.imshow(seg_map) seg_map = seg_map.flatten() seg_lab = [np.where(seg_map == u_label)[0] for u_label in np.unique(seg_map)] # + [markdown] id="GjOMOfmuDHcY" colab_type="text" # 上面的代码首先进行超像素分割,分割结果保存在 seg_map 里。一共分割得到 616 个区域,各个区域像素的 index 保存在 seg_lab 数组里。 # # ## 3. 算法训练 # # 超像素分割的结果可以看作是**『预分类』**:相似颜色和纹理的像素保存相同的label。比如例子里的 tiger图,超像素分类得到616个区域,分别分配 0 至 615 的标签。 # # 使用上面提到的CNN,对输入图片进行分类,分类的目标是:使输出的分割结果,每一个超像素内分配的标签一致,训练到收敛。 # # 具体来说,把图像输入CNN得到一个图为 output,在 output 里,每个像素被分配一个 label (因为网络最后一层是32个 feature map,用 argmax 取值最大的那个为 label ,因此,label 的范围是 0 到 31)。统计每个超像素里像素的 label,以数量最多的为目标,放到一个 target 的图里,计划 output 和 target 间的交叉熵损失,然后反向传播。 # # 经过多轮训练,CNN会逐步实现具备相同语义信息的小区块合并,得到大区块。(代码设置里,当最终只剩下4个区域时,会停止迭代。) # # # + id="R8YSnkv6fwqu" colab_type="code" colab={} '''train init''' device = torch.device("cuda" if torch.cuda.is_available() else 'cpu') tensor = image.transpose((2, 0, 1)) tensor = tensor.astype(np.float32) / 255.0 tensor = tensor[np.newaxis, :, :, :] tensor = torch.from_numpy(tensor).to(device) model = MyNet(inp_dim, mod_dim1, mod_dim2).to(device) criterion = torch.nn.CrossEntropyLoss() optimizer = torch.optim.SGD(model.parameters(), lr=5e-2, momentum=0.9) image_flatten = image.reshape((-1, 3)) color_avg = np.random.randint(255, size=(max_label_num, 3)) show = image '''train loop''' start_time1 = time.time() model.train() for batch_idx in range(train_epoch): '''forward''' optimizer.zero_grad() output = model(tensor)[0] output = output.permute(1, 2, 0).view(-1, mod_dim2) target = torch.argmax(output, 1) im_target = target.data.cpu().numpy() '''refine''' for inds in seg_lab: u_labels, hist = np.unique(im_target[inds], return_counts=True) im_target[inds] = u_labels[np.argmax(hist)] '''backward''' target = torch.from_numpy(im_target) target = target.to(device) loss = criterion(output, target) loss.backward() optimizer.step() '''show image''' un_label, lab_inverse = np.unique(im_target, return_inverse=True, ) if un_label.shape[0] < max_label_num: # update show img_flatten = image_flatten.copy() if len(color_avg) != un_label.shape[0]: color_avg = [np.mean(img_flatten[im_target == label], axis=0, dtype=np.int) for label in un_label] for lab_id, color in enumerate(color_avg): img_flatten[lab_inverse == lab_id] = color show = img_flatten.reshape(image.shape) print('Loss:', batch_idx, loss.item()) if len(un_label) < min_label_num: break '''save''' time1 = time.time() - start_time1 print('TimeUsed: %.2f' % time1) cv2.imwrite("seg_%s_%ds.jpg" % (input_image_path[6:-4], time1), show) plt.imshow(show) # + [markdown] id="M3Yr0m6DwKn0" colab_type="text" # ## 4. 总结 # # **曾伊言对算法的理解:** 深度学习CNN在整个无监督语义分割任务中,承担的任务是:对经典机器学习无监督语义分割的细粒度预分类结果进行处理。并在迭代中,逐步对小区块进行合并,最后得到符合人类预期的语义分割结果。 # # 但是,这个方法也有明显的**缺点**:那就是鲁棒性不强,算法受参数影响大(包括梯度下降法的参数,与机器学习的预分类算法的参数),并且算法多次随机重启的结果会有不同。
6,796
/SARIMAX-forecast.ipynb
ca3752400fb60529dacf1a963e7f57c362e58ad7
[]
no_license
luozhongbin2017/capstone1
https://github.com/luozhongbin2017/capstone1
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
1,513,068
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + import numpy as np import pandas as pd from sklearn.metrics import r2_score from sklearn.preprocessing import MinMaxScaler from statsmodels.tsa.stattools import adfuller, acf, pacf from statsmodels.graphics.tsaplots import plot_acf, plot_pacf from statsmodels.tsa.seasonal import seasonal_decompose from statsmodels.tsa.arima_model import ARMA import statsmodels.api as sm import matplotlib.pyplot as plt import seaborn as sns from pyramid.arima import auto_arima sns.set_style('whitegrid') # %matplotlib inline # %config InlineBackend.figure_format = 'retina' import warnings warnings.filterwarnings('ignore') # - # Extracting data for ETH at 5 min interval. df_eth = pd.read_json('https://poloniex.com/public?command=returnChartData&currencyPair=USDT_ETH&start=1438992000&end=9999999999&period=300') df_eth.head() df_eth.info() # #### Only looking at the closing price df = pd.concat([df_eth['date'], df_eth['close']], axis=1) df.head() # #### Setting the datetime as the index. df.set_index('date', inplace=True) df.head() df.info() # ### Plotting the rolling mean for day, week, month. # + df_eth_dailyclosemean = df.close.resample('D').mean() rolmean_1 = pd.rolling_mean(df_eth_dailyclosemean, window = 1) rolmean_7 = pd.rolling_mean(df_eth_dailyclosemean, window = 7) rolmean_30 = pd.rolling_mean(df_eth_dailyclosemean, window = 30) fig = plt.figure(figsize=(15, 8)) mean = plt.plot(rolmean_1, color='red', label='Rolling Mean (1 day)') mean = plt.plot(rolmean_7, color='blue', label='Rolling Mean(1 week)') mean = plt.plot(rolmean_30, color='green', label='Rolling Mean(1 month)') plt.legend(loc='best') plt.title('Rolling Mean for 1 Day , 1-Week and 1-Month') plt.ylabel('USD$', fontsize=18) plt.xlabel('YEAR-MONTH', fontsize=18) plt.show() # - # ### Resample to weekly df_weeklyclose = df[['close']].resample('W').mean() df_weeklyclose.plot(kind='line', lw='0.5', figsize=(15,8), title= 'Ethereum Weekly Closing Mean\n', fontsize=15) plt.ylabel('USD$', fontsize=18) plt.xlabel('YEAR-MONTH', fontsize=18) plt.show() # + # Additive decomposition for non-log values. from pylab import rcParams rcParams['figure.figsize'] = 11, 9 rcParams['lines.linewidth'] = 0.5 decomposition = seasonal_decompose(df_weeklyclose, model='additive') fig = decomposition.plot() plt.show() # - # Multiplicative decomposition for non-log values. 
decomposition = seasonal_decompose(df_weeklyclose, model='multiplicative') fig = decomposition.plot() plt.show() df_weeklyclose.head() df_weeklyclose.tail(36) df_weeklyclose.plot(figsize=(12,6)) # ### Using Log Transformation df_log = np.log(df_weeklyclose['close']) df_log.plot(figsize=(12,6)) # Rolling Standard Deviation - orginal versus log transformed fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True, figsize=(12,6)) _ = df_weeklyclose['close'].rolling(4).std().plot(ax=ax1) _ = df_log.rolling(4).std().plot(ax=ax2) # ### Using Box-Cox Transformation # + from scipy import special def box_cox_rolling_coeffvar(box_cox_param, endog, freq): """helper to find Box-Cox transformation with constant standard deviation returns RLM results instance """ roll_air = special.boxcox(endog, box_cox_param).rolling(window=freq) y = roll_air.std() m = roll_air.mean() x = sm.add_constant(m) res_rlm = sm.RLM(y, x, missing='drop').fit() return res_rlm endog = df_weeklyclose['close'] freq = 4 tt = [(lam, box_cox_rolling_coeffvar(lam, endog, freq).pvalues[1]) for lam in np.linspace(-1, 1, 21)] tt = np.asarray(tt) print(tt[tt[:,1].argmax()]) # - # Rolling Standard Deviation - Box-Cox transformed ax = special.boxcox(df_weeklyclose['close'], 0).rolling(window=4).std().plot(figsize=(12,6)) weeklyclose_t = special.boxcox(df_weeklyclose['close'], 0) # One simple approach is to perform a grid search over multiple values of p,d,q using some sort of performance criteria. We can write a code to do a manual grid search or we can use the pyramid-arima library. The pyramid-arima library for Python allows us to quickly perform this grid search and even creates a model object that you can fit to the training data. # The Akaike information criterion (AIC) is an estimator of the relative quality of statistical models for a given set of data. Given a collection of models for the data, AIC estimates the quality of each model, relative to each of the other models. # Auto Arima stepwise_model = auto_arima(weeklyclose_t, start_p=1, start_q=1, max_p=3, max_q=3, m=12, start_P=0, seasonal=True, d=1, D=1, trace=True, error_action='ignore', suppress_warnings=True, stepwise=True) print(stepwise_model.aic()) # The Akaike information criterion (AIC) is an estimator of the relative quality of statistical models for a given set of data. Given a collection of models for the data, AIC estimates the quality of each model, relative to each of the other models. # The AIC value will allow us to compare how well a model fits the data and takes into account the complexity of a model, so models that have a better fit while using fewer features will receive a better (lower) AIC score than similar models that utilize more features. https://medium.com/@josemarcialportilla/using-python-and-auto-arima-to-forecast-seasonal-time-series-90877adff03c # The actual value of the AIC (or AICc), and whether it is positive or negative, means nothing. If you simply changed the units the data are expressed in, the AIC (and AICc) would change dramatically. But the difference between the AIC of the two alternative models would not change at all. # Bottom line: Ignore whether it is positive or negative. Just choose the smallest which is the most optimal for the case. # #### The output of our code suggests that using the model SARIMAX(2, 1, 0)x(0, 1, 1, 12) which yields the lowest AIC value of -97.012. We should therefore consider this to be optimal option out of all the models we have generated. 
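# A quick, hedged cross-check of the stepwise search: refit the two seasonal variants mentioned in
# this notebook directly with statsmodels and compare their AICs on the same transformed series.
# This is an illustration only — the candidate list is an assumption, not an exhaustive grid.

# +
candidate_orders = [((2, 1, 0), (0, 1, 1, 12)),
                    ((2, 1, 0), (0, 1, 2, 12))]
for order, seasonal_order in candidate_orders:
    fit = sm.tsa.SARIMAX(weeklyclose_t, order=order, seasonal_order=seasonal_order,
                         enforce_stationarity=True, enforce_invertibility=True).fit(disp=False)
    print(order, seasonal_order, 'AIC: %.2f' % fit.aic)
# -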
res_s = sm.tsa.SARIMAX(weeklyclose_t,
                       order=(2, 1, 0),
                       seasonal_order=(0, 1, 2, 12),
                       enforce_stationarity=True,
                       enforce_invertibility=True
                       ).fit()
res_s.summary()

# ### Interpreting the model table

res_s.summary().tables[0]

res_s.summary().tables[1]

# The coef column shows the weight (i.e. importance) of each feature and how each one impacts the time series.
# The P>|z| column informs us of the significance of each feature weight. We only retain features for p < 0.05.

res_s.summary().tables[2]

# The Jarque-Bera test is a goodness-of-fit test of whether the data has the skewness and kurtosis of a normal distribution. A normal distribution has a skewness of 0 and a kurtosis of 3; the residuals here show a skewness of 0.22 and a kurtosis of 3.67.

# ### Plotting diagnostic tools

# When fitting seasonal ARIMA models (and any other models for that matter), it is important to run model diagnostics to ensure that none of the assumptions made by the model have been violated. The plot_diagnostics object allows us to quickly generate model diagnostics and investigate for any unusual behavior.

# We need to ensure that the residuals of our model are randomly distributed with zero mean and not serially correlated, i.e. we'd like the remaining information to be white noise. If the fitted seasonal ARIMA model does not satisfy these properties, it is a good indication that it can be further improved.

# The residual plot of the fitted model in the upper left corner appears to be white noise, as it does not display obvious seasonality or trend behaviour. The histogram plot in the upper right corner, paired with the kernel density estimate (red line), indicates that the residuals are almost normally distributed; this is compared to the density of the standard normal distribution (green line). The correlogram (autocorrelation plot) confirms this result, since the residuals show low correlations with their lagged values.

# Although the fit so far appears to be fine, a better fit could be achieved with a more complex model.

res_s.plot_diagnostics(figsize=(18, 12))
plt.show()

# +
# in-sample-prediction and confidence bounds
pred = res_s.get_prediction(start=pd.to_datetime('2018-01-07'),
                            end=pd.to_datetime('2018-09-02'),
                            dynamic=True, full_results=True)
pred_ci = pred.conf_int()

plt.figure(figsize=(18, 8))

# plot in-sample-prediction
ax = weeklyclose_t['2015-08-09':].plot(label='Observed',color='#006699');
pred.predicted_mean.plot(ax=ax, label='One-step Ahead Prediction', alpha=.7, color='#ff0066');

# draw confidence bound (gray)
ax.fill_between(pred_ci.index,
                pred_ci.iloc[:, 0],
                pred_ci.iloc[:, 1], color='#ff0066', alpha=.25);

# style the plot
ax.fill_betweenx(ax.get_ylim(), pd.to_datetime('2018-01-07'), weeklyclose_t.index[-1], alpha=.15, zorder=-1, color='grey');
ax.set_xlabel('Date')
ax.set_ylabel('Price')
plt.legend(loc='upper left')
plt.show()

# +
import math

y_hat = pred.predicted_mean
y_true = weeklyclose_t['2018-01-07':]

# compute the mean square error
mse = ((y_hat - y_true) ** 2).mean()
print('Prediction quality: {:.2f} MSE ({:.2f} RMSE)'.format(mse, math.sqrt(mse)))

# +
plt.plot(y_true, label='Observed', color='#006699')
plt.plot(y_hat, label='In-Sample Forecast', color='#ff0066')

ax.set_xlabel('Date')
ax.set_ylabel('Price')
plt.legend(loc='upper right');
plt.show()
# -

# The model predicts a steady climb, but in actual fact the price went down.
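
# The MSE/RMSE above are computed in the Box-Cox (log) scale, which is hard to interpret.
# A minimal sketch of the same comparison in the original USD scale, assuming `pred` and
# `weeklyclose_t` from the cells above (inv_boxcox with lambda 0 simply exponentiates):

# +
y_hat_usd = special.inv_boxcox(pred.predicted_mean, 0)
y_true_usd = special.inv_boxcox(weeklyclose_t['2018-01-07':], 0)

mse_usd = ((y_hat_usd - y_true_usd) ** 2).mean()
print('In-sample prediction error: {:.2f} RMSE (USD)'.format(math.sqrt(mse_usd)))
# -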
# Out of sample forecast pred = res_s.get_prediction(start=pd.to_datetime('2018-08-19'), end=pd.to_datetime('2018-12-19')) pred_ci = pred.conf_int() pred_ci.head() endog = weeklyclose_t ax = endog.plot(label='observed') ax.figure.set_size_inches(12, 8) pred.predicted_mean.plot(ax=ax, label='Forecast', lw=3, alpha=.7, color='r') ax.fill_between(pred_ci.index, pred_ci.iloc[:, 0], pred_ci.iloc[:, 1], color='k', alpha=.2) ax.set_title('Forecast in transformed scale', fontsize=16) _ = plt.legend() #Transform forecast to original scale pred_ci_orig = special.inv_boxcox(pred.conf_int(), 0) forecast = special.inv_boxcox(pred.predicted_mean, 0) pred.predicted_mean forecast.head() ax = df_weeklyclose['close'].plot(label='observed') ax.figure.set_size_inches(12, 8) forecast.plot(ax=ax, label='forecast', lw=3, alpha=.7, color='r') ax.fill_between(pred_ci_orig.index, pred_ci_orig.iloc[:, 0], pred_ci_orig.iloc[:, 1], color='k', alpha=.15) ax.set_title('Forecast in original units (Mean)', fontsize=16) plt.legend(); # As we forecast further out into the future, it is natural for the model to become less confident in its values. This is reflected by the confidence intervals generated by our model, which grow larger as we move further out into the future. # # We also noticed the predicted values regularly mirror the previous values.
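
# The imports at the top of this notebook pull in `adfuller`, but it is never used. As a
# closing check, here is a minimal sketch of an Augmented Dickey-Fuller test on the
# differenced Box-Cox series (assuming `weeklyclose_t` from the cells above); a p-value
# below 0.05 supports treating the differenced series as stationary.

# +
diffed = pd.Series(weeklyclose_t).diff().dropna()
adf_stat, p_value = adfuller(diffed, autolag='AIC')[:2]
print('ADF statistic: {:.3f}, p-value: {:.4f}'.format(adf_stat, p_value))
# -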
11,200
/examples/Pendulum Control.ipynb
b9938d28293557eefe3d61f377ca3a6f897c10f8
[ "MIT" ]
permissive
drgona/mpc.pytorch
https://github.com/drgona/mpc.pytorch
0
0
MIT
2019-09-17T20:27:43
2019-09-16T05:03:47
null
Jupyter Notebook
false
false
.py
230,755
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Nth Fibonacci number<br> # Nth term of fibonacci series F(n) is calculated using following formula -<br> # F(n) = F(n-1) + F(n-2), <br> # Where, F(1) = F(2) = 1<br> # Provided N you have to find out the Nth Fibonacci Number.<br> # <br> # Input Format :<br> # Integer n<br> # Output Format :<br> # Nth Fibonacci term i.e. F(n)<br> # <br> # Constraints:<br> # 1 <= n <= 25<br> # <br> # Sample Input 1:<br> # 4<br> # Sample Output 2:<br> # 3 <br> # <br> # Sample Input 1:<br> # 6<br> # Sample Output 2:<br> # 8 # + def Fibonacci(n): if n == 1: return 1 elif n == 2: return 1 else: return Fibonacci(n - 1) + Fibonacci(n - 2) n=int(input()) print(Fibonacci(n)) ] = pd.to_datetime(df['Data de lançamento']) # Vamos tentar identificar o problema da coluna 'Data de lançamento' df['Data de lançamento'].value_counts() # Exibir alguns filmes que estão com data de lançamento igual à Relançamento df[df['Data de lançamento'] == 'Relançamento'].head() # ## Decisão sobre dados incorretos/faltantes/divergentes # Criar novo DataFrame sem os relançamentos df_novo = df[df['Data de lançamento'] != 'Relançamento'].copy() # Exibe o tamanho do DataFrame (linhas, colunas) df_novo.shape # Vamos verificar os tipos das colunas df_novo.dtypes # Converter a coluna 'Data de lançamento' de object para data df_novo['Data de lançamento'] = pd.to_datetime(df_novo['Data de lançamento']) # Vamos verificar os tipos das colunas df_novo.dtypes # Quais os anos tiverem mais filmes lançados? df_novo['Data de lançamento'].dt.year.value_counts() # Plotar gráfico com os lançamentos por ano df_novo['Data de lançamento'].dt.year.value_counts().plot.bar() # Qual é o filme que teve a maior bilheteria? df_novo[df_novo['Renda (R$) no ano de exibição'] == df_novo['Renda (R$) no ano de exibição'].max()] # Qual é o filme que teve a menor bilheteria? df_novo[df_novo['Renda (R$) no ano de exibição'] == df_novo['Renda (R$) no ano de exibição'].min()] # Quantos filmes brasileiros e estrangeiros? df_novo['Nacionalidade da obra'].value_counts() )(x, QuadCost(Q, p), dx) next_action = nominal_actions[0] u_init = torch.cat((nominal_actions[1:], torch.zeros(1, n_batch, dx.n_ctrl)), dim=0) u_init[-2] = u_init[-3] x = dx(x, next_action) n_row, n_col = 4, 4 fig, axs = plt.subplots(n_row, n_col, figsize=(3*n_col,3*n_row)) axs = axs.reshape(-1) for i in range(n_batch): dx.get_frame(x[i], ax=axs[i]) axs[i].get_xaxis().set_visible(False) axs[i].get_yaxis().set_visible(False) fig.tight_layout() fig.savefig(os.path.join(t_dir, '{:03d}.png'.format(t))) plt.close(fig) # + vid_fname = 'pendulum-{}.mp4'.format(mode) if os.path.exists(vid_fname): os.remove(vid_fname) cmd = 'ffmpeg -r 16 -f image2 -i {}/%03d.png -vcodec libx264 -crf 25 -pix_fmt yuv420p {}'.format( t_dir, vid_fname ) os.system(cmd) print('Saving video to: {}'.format(vid_fname)) # - video = io.open(vid_fname, 'r+b').read() encoded = base64.b64encode(video) HTML(data='''<video alt="test" controls> <source src="data:video/mp4;base64,{0}" type="video/mp4" /> </video>'''.format(encoded.decode('ascii')))
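
# The ffmpeg call above goes through os.system(), which silently ignores a failing encode.
# Below is a minimal sketch of the same step with subprocess so errors surface, assuming the
# `t_dir` and `vid_fname` variables (and the os import) used in the cells above:

# +
import subprocess

cmd_args = ['ffmpeg', '-r', '16', '-f', 'image2',
            '-i', os.path.join(t_dir, '%03d.png'),
            '-vcodec', 'libx264', '-crf', '25', '-pix_fmt', 'yuv420p',
            vid_fname]
subprocess.run(cmd_args, check=True)  # raises CalledProcessError if ffmpeg fails
# -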
3,487
/Transform/16_featureengineering_exercise.ipynb
c663fd27478a293afdb2d5a817c76078c5f4f0c5
[]
no_license
RuchitDoshi/ETL_pipeline
https://github.com/RuchitDoshi/ETL_pipeline
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
45,468
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + colab={} colab_type="code" id="cS6a8-FZFPRY" import seaborn as sns import numpy as np import pandas as pd import matplotlib.pyplot as plt # + colab={} colab_type="code" id="DeKj_I1eFg54" data=sns.load_dataset('titanic') # + colab={"base_uri": "https://localhost:8080/", "height": 204} colab_type="code" executionInfo={"elapsed": 920, "status": "ok", "timestamp": 1585062589680, "user": {"displayName": "CampusX", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggb4dBIfEwacwZIPh_xgAaV6zeGrsD3uBmGBK-Y=s64", "userId": "17274569631252575216"}, "user_tz": -330} id="ttL3B-FkFqPl" outputId="084469a7-63d1-4e1b-9751-8ae224a47b8b" plt.style.use('fivethirtyeight') data.head() # + colab={} colab_type="code" id="IMAPDPf1F7Cz" # Problem 1 : Plot a Heatmap on the correlation between multiple columns # - plt.figure(figsize=(10,7)) sns.heatmap(data.corr(), annot=True, cbar=True, cmap="summer").set_ylim(8.0,0) plt.title("Heatmap of the correlation between all the columns of Titanic dataset\n") plt.show() # + colab={} colab_type="code" id="58iCW6cQGA7A" # Problem 2 : Plot the distribution of age column using a Boxplot on the basis of sex column # - sns.catplot(x='sex',y='age',kind='box',height=6,aspect=1.5,data=data) # + colab={} colab_type="code" id="mmyWsjQlGbjG" # Problem 3 : Plot the distribution of age using a ViolinPlot on the basis of both Pclass and Sex column # - sns.catplot(y='age',x='pclass',hue='sex',kind='violin',aspect=1.6,height=6,data=data,split=True) # + colab={} colab_type="code" id="tLmYdldJGn8C" # Problem 4 : Plot the count of Survivors and Non-Survivors # - sns.catplot(x='alive',kind='count',height=6,data=data) # + colab={} colab_type="code" id="lunLsoLLGzAe" # Problem 5: Plot the count of survivors as well as non-survivors on the basis of Pclass # - sns.catplot(x='alive',hue='pclass',kind='count',height=6,aspect=1.6,data=data) # + colab={} colab_type="code" id="c7j0prL7G9_e" # Problem 6: Plot a scatterplot between age and fare column on the basis of both Pclass and Sex column # + colab={} colab_type="code" id="F7C_aaxpHQNZ" sns.relplot(y='fare',x='age',hue='sex',style='pclass',size='pclass',height=6,aspect=1.6,sizes=(50,100),data=data,) # - 'East Asia & Pacific (excluding high income)', 'East Asia & Pacific (IDA & IBRD countries)', 'Euro area', 'Early-demographic dividend', 'Lower middle income', 'Latin America & Caribbean', 'Latin America & the Caribbean (IDA & IBRD countries)', 'Latin America & Caribbean (excluding high income)', 'Europe & Central Asia (IDA & IBRD countries)', 'Middle East & North Africa', 'Europe & Central Asia (excluding high income)', 'South Asia (IDA & IBRD)', 'South Asia', 'Arab World', 'IDA total', 'Sub-Saharan Africa', 'Sub-Saharan Africa (IDA & IBRD countries)', 'Sub-Saharan Africa (excluding high income)', 'Middle East & North Africa (excluding high income)', 'Middle East & North Africa (IDA & IBRD countries)', 'Central Europe and the Baltics', 'Pre-demographic dividend', 'IDA only', 'Least developed countries: UN classification', 'IDA blend', 'Fragile and conflict affected situations', 'Heavily indebted poor countries (HIPC)', 'Low income', 'Small states', 'Other small states', 'Not classified', 'Caribbean small states', 'Pacific island small states'] # remove non countries from the data df_2016 = df_2016[~df_2016['Country 
Name'].isin(non_countries)]
df_2016.reset_index(inplace=True, drop=True)
# -

# # Exercise 1
#
# Create a new feature called gdppercapita in a new column. This feature should be the gdp value divided by the population.

# +
# TODO: create a new feature called gdppercapita,
# which is the gdp value divided by the population value for each country

df_2016['gdppercapita'] = df_2016['gdp']/df_2016['population']
# -

# # Exercise 2 (Challenge)
#
# This next exercise is more challenging and assumes you know how to use the pandas apply() method as well as lambda functions.
#
# Write code that creates multiples of a feature. For example, if you take the 'gdp' column and an integer like 3, you want to append a new column with the square of gdp (gdp^2) and another column with the cube of gdp (gdp^3).
#
# Follow the TODOs below. These functions build on each other in the following way:
#
# create_multiples(b, k) has two inputs. The first input, b, is a floating point number. The second input, k, is an integer. The output is a list of multiples of b. For example create_multiples(3, 4) would return this list: $[3^2, 3^3, 3^4]$ or in other words $[9, 27, 81]$.
#
# Then the column_name_generator(colname, k) function outputs a list of column names. For example, column_name_generator('gdp', 4) would output a list of strings `['gdp2', 'gdp3', 'gdp4']`.
# And finally, concatenate_features(df, column, num_columns) uses the two previous functions to create the new columns and then append these new columns to the original data frame.

# +
# TODO: Fill out the create_multiples function.
# The create_multiples function has two inputs. A floating point number and an integer.
# The output is a list of multiples of the input b starting from the square of b and ending at b^k.

def create_multiples(b, k):

    # TODO: use a for loop to make a list of multiples of b: ie b^2, b^3, b^4, etc... until b^k
    # You do not need to include b^0, which would be 1. You also do not need b^1 because that feature
    # is already in the data frame.

    new_features = []
    for i in range(2, k+1, 1):
        new_features.append(b**i)
    return new_features

# TODO: Fill out the column_name_generator function.
# The function has two inputs: a string representing a column name and an integer k.
# The 'k' variable is the same as the create_multiples function.
# The output should be a list of column names.
# For example if the inputs are ('gdp', 4) then the output is a list of strings ['gdp2', 'gdp3', 'gdp4']

def column_name_generator(colname, k):

    col_names = []
    for i in range(2, k+1, 1):
        col_names.append(colname + str(i))
    return col_names

# TODO: Fill out the concatenate_features function.
# The function has three inputs. A dataframe, a column name represented by a string, and an integer representing
# the maximum power to create when engineering features.
# If the input is (df_2016, 'gdp', 3), then the output will be the df_2016 dataframe with two new columns:
# one new column will be 'gdp2' ie gdp^2, and the other column will be 'gdp3' ie gdp^3.
# HINT: There may be more than one way to do this.
# The TODOs in this section point you towards one way that works

def concatenate_features(df, column, num_columns):

    # TODO: Use the pandas apply() method to create the new features. Inside the apply method, you
    # can use a lambda function with the create_multiples function
    # HINT: df[column].apply(lambda ....)
    # TODO: Create a dataframe with a separate column for each of the new features
    # Use the column_name_generator() function to create the column names
    # HINT: In the pd.DataFrame() method, you can specify column names by passing a list to the columns option
    # HINT: Using new_features.tolist() might be helpful
    # TODO: concatenate the original data frame in df with the new_features_df dataframe
    # and return this concatenated dataframe
    # NOTE: the loop below takes a more direct route than the apply()-based hints above;
    #       a sketch of the apply() route appears after the solution cell below.

    col_names = column_name_generator(column, num_columns)
    for i in range(len(col_names)):
        df[col_names[i]] = (df[column])**(i+2)
    return df
# -

# # Solution
#
# Run the code cell below. If your code is correct, you should get a dataframe with 8 columns. Here are the first two rows of what your results should look like.
#
# | Country Name | year | gdp          | population | gdppercapita | gdp2         | gdp3         | gdp4         |
# |--------------|------|--------------|------------|--------------|--------------|--------------|--------------|
# | Aruba        | 2016 | 2.584464e+09 | 104822.0   | 24655.737223 | 6.679453e+18 | 1.726280e+28 | 4.461509e+37 |
# | Afghanistan  | 2016 | 1.946902e+10 | 34656032.0 | 561.778746   | 3.790428e+20 | 7.379593e+30 | 1.436735e+41 |
#
# There is a solution in the 16_featureengineering_exercise folder if you go to File->Open.

concatenate_features(df_2016, 'gdp', 4)
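
# The TODO hints above describe an apply()-based route rather than the loop used in the cell
# before. Here is a sketch of that alternative, reusing create_multiples() and
# column_name_generator() exactly as defined above; unlike concatenate_features(), it does
# not modify df in place.

# +
def concatenate_features_apply(df, column, num_columns):
    # one list of powers per row, built with the helper above
    new_features = df[column].apply(lambda x: create_multiples(x, num_columns))
    # expand the lists into named columns
    new_features_df = pd.DataFrame(new_features.tolist(),
                                   columns=column_name_generator(column, num_columns),
                                   index=df.index)
    # append the new columns to the original data frame
    return pd.concat([df, new_features_df], axis=1)

# concatenate_features_apply(df_2016, 'gdp', 4).head()
# -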
8,594
/tf_basic/07_tf_matplotlib.ipynb
f758d964dd7bcddb6f027fa9863d3cda8c58d203
[]
no_license
Raistlind/dlrun
https://github.com/Raistlind/dlrun
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
12,931
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.15.2
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---

# + id="M3lqkb9f60vU"
from google.colab import files
files.upload()  # upload kaggle.json
# !mkdir -p ~/.kaggle
# !mv kaggle.json ~/.kaggle/
# !chmod 600 /root/.kaggle/kaggle.json

# + id="L4SAPNgD63Ti"
# Program 2.12
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
# %matplotlib inline

# Prepare the data
def prepare():
    # Download and read the global temperature records
    # !kaggle datasets download -d schedutron/global-temperatures
    # !unzip global-temperatures.zip
    df = pd.read_csv('GlobalTemperatures.csv')
    # Store the average temperature in a data frame
    df = pd.DataFrame(df, columns=['dt', 'LandAverageTemperature'])
    print(df)
    return df

# Find the windows that contain missing values and the window with the most of them
def get_missing_range(df):
    i = 0
    n = 20
    rlist = []
    cmax = 0
    ci = -1
    while i+n < len(df):
        mc = df[i:i+n].isnull().sum(axis=1)
        c = np.sum(mc.values)
        if c > 0:
            rlist.append(range(i, i+n))
            if cmax < c:
                cmax = c
                ci = len(rlist)-1
        i += n
    return rlist, ci

# Plot the window with the most missing values
def check_missing_values(df):
    plt.figure(figsize=(7,5))
    df['dt'] = pd.to_datetime(df['dt'])
    df.index = pd.DatetimeIndex(df['dt'], name='Date')
    df.drop('dt', axis=1, inplace=True)
    # Locate the ranges that contain missing values
    rlist, ci = get_missing_range(df)
    print('-->', list(rlist[ci]))
    l = list(rlist[ci])
    print('Include missing value', df[l[0]:l[-1]])
    plt.plot(df[l[0]:l[-1]])
    plt.xticks(rotation=90)
    plt.title('Including Missing Values')
    plt.xlabel('Datetime')
    plt.ylabel('Temperature')
    plt.savefig('including_missing_value.png', bbox_inches='tight')
    plt.show()
    return rlist[ci]

# Interpolate the missing values (using the DataFrame.interpolate method)
def interpolate(df, itype, direction, a_range):
    print('{} interpolate'.format(itype))
    if itype is None:
        df_i = df.interpolate(limit=1, limit_direction=direction)['LandAverageTemperature']
    else:
        df_i = df.interpolate(method=itype, order=1)['LandAverageTemperature']
    print(df_i[list(a_range)])
    plt.figure(figsize=(7,5))
    plt.plot(df_i[list(a_range)])
    plt.title('{} interpolation'.format(itype))
    plt.xticks(rotation=90)
    plt.xlabel('Datetime')
    plt.ylabel('Temperature')
    plt.savefig('{}_interpolation.png'.format(itype), bbox_inches='tight')
    plt.show()

def main():
    df = prepare()
    a_range = check_missing_values(df)
    direction = 'forward'
    for itype in ['time', 'values', 'linear', 'index', 'spline', 'nearest']:
        interpolate(df, itype, direction, a_range)

if __name__ == '__main__':
    main()
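
# A quick, self-contained illustration (independent of the Kaggle download above) of how a
# few of the interpolation methods looped over in main() behave on a toy series; 'nearest'
# needs SciPy installed.

# +
toy = pd.Series([1.0, np.nan, np.nan, 4.0, 5.0],
                index=pd.date_range('2020-01-01', periods=5, freq='D'))
for method in ['linear', 'time', 'nearest']:
    print(method, toy.interpolate(method=method).round(2).tolist())
# -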
10,302
/4-machine-learning-supervisionado-inter/Aula 11 - Séries de Tempo/time_series.ipynb
0f1c1d5792e7db40080bb6be0ed04c162dc6830a
[]
no_license
jeremiedron/datascience_course
https://github.com/jeremiedron/datascience_course
2
0
null
2019-03-23T20:14:01
2019-03-23T20:13:56
null
Jupyter Notebook
false
false
.py
1,490,915
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Séries de tempo # # ( https://jakevdp.github.io/PythonDataScienceHandbook/03.11-working-with-time-series.html ) # # # Time Series é considerado uma das habilidades menos conhecidas no espaço de análise (Mesmo eu tinha pouca pista sobre isso alguns dias atrás). Mas como você sabe que o nosso primeiro Hackathon Mini é baseado nele, eu me preparo para uma jornada para aprender os passos básicos para resolver um problema de Séries Temporais e aqui estou compartilhando o mesmo com você. Estes definitivamente ajudarão você a obter um modelo decente em nosso hackathon hoje. # # # 1 - O que torna a série de tempo especial? # 2 - Carregando e manuseando séries temporais em pandas # 3 - Como verificar o estacionarismo de uma série temporal? # 4 - Como fazer uma série temporal estacionária? # 5 - Previsão de uma série temporal # # # ### 1. O que torna a série temporal especial? # Como o nome sugere, TS é uma coleção de pontos de dados coletados em intervalos de tempo constantes . Estes são analisados para determinar a tendência de longo prazo, de modo a prever o futuro ou realizar alguma outra forma de análise. Mas o que faz um TS diferente de dizer um problema de regressão regular? Existem duas coisas: # # * É dependente do tempo . Portanto, a suposição básica de um modelo de regressão linear que as observações são independentes não vale neste caso. # * Juntamente com uma tendência crescente ou decrescente, a maioria dos TS tem alguma forma de tendências de sazonalidade , ou seja, variações específicas de um determinado período de tempo. Por exemplo, se você ver as vendas de uma jaqueta de lã ao longo do tempo, você invariavelmente encontrará maiores vendas nas temporadas de inverno. # # # Por causa das propriedades inerentes de um TS, existem várias etapas envolvidas na análise. Estes são discutidos em detalhes abaixo. Vamos começar carregando um objeto TS no Python. Nós estaremos usando o popular conjunto de dados AirPassengers. # # ### 2. Carregando e Manipulando Séries Temporais em Pandas # # O Pandas possui bibliotecas dedicadas para manipular objetos TS, particularmente a classe datatime64 [ns] que armazena informações de tempo e nos permite executar algumas operações realmente rápidas. Vamos começar ativando as bibliotecas necessárias: # + import pandas as pd import numpy as np import matplotlib.pylab as plt # %matplotlib inline plt.rcParams ['figure.figsize'] = 15, 6 data = pd.read_csv('./data/AirPassengers.csv') data.head() # - # Os dados contêm um mês e um número específicos de passageiros viajando naquele mês. Mas isso ainda não é lido como um objeto TS, pois os tipos de dados são 'objeto' e 'int'. Para ler os dados como uma série temporal, temos que passar argumentos especiais para o comando read_csv: # + dateparse = lambda dates: pd.datetime.strptime(dates, '%Y-%m') data = pd.read_csv('./data/AirPassengers.csv', parse_dates=['Month'], index_col='Month',date_parser=dateparse) data.head() # - # Vamos entender os argumentos um por um: # # - 1) **parse_dates** : especifica a coluna que contém as informações de data e hora. Como dissemos acima, o nome da coluna é 'Mês'. # # - 2) **index_col**: Uma idéia-chave por trás do uso de Pandas para dados TS é que o índice deve ser a variável que descreve as informações de data e hora. 
Então este argumento diz aos pandas para usarem a coluna 'Mês' como índice. # # - 3) **date_parser**: especifica uma função que converte uma string de entrada em uma variável datetime. Por padrão, o Pandas lê dados no formato 'AAAA-MM-DD HH: MM: SS'. Se os dados não estiverem nesse formato, o formato deverá ser definido manualmente. Algo parecido com a função de busca de dados definida aqui pode ser usado para este propósito. # # Agora podemos ver que os dados têm objeto de tempo como índice e #Passengers como a coluna. Podemos cruzar o tipo de dados do índice com o seguinte comando: data.index # Observe o dtype = 'datetime [ns]', que confirma que é um objeto datetime. Como uma preferência pessoal, eu converteria a coluna em um objeto de série para evitar referir nomes de colunas toda vez que eu usar o TS. Por favor, sinta-se livre para usar como um dataframe é que funciona melhor para você. # # ### 3. Como verificar a estacionariedade de uma série temporal? # # Uma TS é considerada estacionária se suas propriedades estatísticas , como média, variância permanecer constante ao longo do tempo . Mas por que isso é importante? A maioria dos modelos TS trabalha no pressuposto de que o TS é estacionário. Intuitivamente, podemos afirmar que se um TS tem um comportamento particular ao longo do tempo, existe uma probabilidade muito alta de que ele siga o mesmo no futuro. Além disso, as teorias relacionadas a séries estacionárias são mais maduras e mais fáceis de implementar em comparação com séries não estacionárias. # # A estacionariedade é definida usando um critério muito estrito. No entanto, para fins práticos, podemos supor que a série seja estacionária se ela tiver propriedades estatísticas constantes ao longo do tempo, isto é. Os seguintes: # # - média constante # - variância constante # - uma autocovariância que não depende do tempo. # # Vou pular os detalhes, pois é muito claramente definido neste artigo . Vamos para as maneiras de testar a estacionariedade. Primeiro e mais importante é simples traçar os dados e analisar visualmente. Os dados podem ser plotados usando o seguinte comando: plt.plot(data) # É claramente evidente que há uma tendência geral crescente nos dados, juntamente com algumas variações sazonais. No entanto, nem sempre é possível fazer tais inferências visuais (veremos esses casos mais tarde). Então, mais formalmente, podemos verificar a estacionariedade usando o seguinte: # # - **Plotando Rolando Estatísticas**: Podemos traçar a média móvel ou variância móvel e ver se varia com o tempo. Ao mover a média / variância, quero dizer que, em qualquer instante, 't', consideraremos a média / variância do último ano, isto é, os últimos 12 meses. Mas, novamente, isso é mais uma técnica visual. # # # - **Teste Dickey-Fuller**: Este é um dos testes estatísticos para verificar a estacionariedade. Aqui a hipótese nula é que o TS é não-estacionário. Os resultados do teste compreendem uma estatística de teste e alguns valores críticos para os níveis de confiança de diferença. Se a estatística de teste for menor que o valor crítico, podemos rejeitar a hipótese nula e dizer que a série é estacionária. Consulte este artigo para detalhes. # # Esses conceitos podem não parecer muito intuitivos neste momento. Eu recomendo passar pelo artigo prequel. Se você estiver interessado em alguma estatística teórica, você pode consultar Introdução a Séries Temporais e Previsão por Brockwell e Davis . 
O livro é um pouco pesado para as estatísticas, mas se você tiver a habilidade de ler as entrelinhas, poderá entender os conceitos e tangenciar as estatísticas tangencialmente. # # De volta à verificação da estacionariedade, usaremos muito os gráficos estatísticos de rolagem juntamente com os resultados do teste Dickey-Fuller, de modo que defini uma função que recebe um TS como entrada e os gerou para nós. Por favor, note que eu tracei desvio padrão ao invés de variância para manter a unidade semelhante à média. from statsmodels.tsa.stattools import adfuller def test_stationarity(timeseries): #Determing rolling statistics rolmean = timeseries.rolling(12).mean() rolstd = timeseries.rolling(12).std() #Plot rolling statistics: orig = plt.plot(timeseries, color='blue',label='Original') mean = plt.plot(rolmean, color='red', label='Rolling Mean') std = plt.plot(rolstd, color='black', label = 'Rolling Std') plt.legend(loc='best') plt.title('Rolling Mean & Standard Deviation') plt.show(block=False) #Perform Dickey-Fuller test: # print('Results of Dickey-Fuller Test:') # dftest = adfuller(timeseries, autolag='AIC') # dfoutput = pd.Series(dftest[0:4], index=['Test Statistic','p-value','#Lags Used','Number of Observations Used']) # for key,value in dftest[4].items(): # dfoutput['Critical Value (%s)'%key] = value # print(dfoutput) # O código é bem direto. Por favor, sinta-se à vontade para discutir o código nos comentários se você enfrentar desafios ao compreendê-lo. Vamos executá-lo para nossas séries de entrada: test_stationarity(data) # ### 4. Como fazer uma série temporal estacionária? # # Embora a suposição de estacionariedade seja tomada em muitos modelos TS, quase nenhuma das séries temporais práticas é estacionária. Então, os estatísticos descobriram maneiras de tornar as séries estacionárias, o que discutiremos agora. Na verdade, é quase impossível fazer uma série perfeitamente estacionária, mas tentamos levá-la o mais perto possível. Vamos entender o que está tornando um TS não estacionário. Existem duas razões principais por trás do não estacionarista de um TS: # # - 1. Tendência - variação média ao longo do tempo. Por exemplo, neste caso, vimos que, em média, o número de passageiros estava crescendo com o tempo. # - 2. Sazonalidade - variações em intervalos de tempo específicos. Por exemplo, as pessoas podem ter a tendência de comprar carros em um determinado mês devido ao aumento de salário ou festivais. # # # O princípio subjacente é modelar ou estimar a tendência e a sazonalidade da série e removê-la da série para obter uma série estacionária. Em seguida, as técnicas de previsão estatística podem ser implementadas nesta série. A etapa final seria converter os valores previstos na escala original aplicando as restrições de tendência e sazonalidade. Nota: discutirei vários métodos. Alguns podem funcionar bem neste caso e outros não. Mas a ideia é pegar um jeito de todos os métodos e não focar apenas no problema em questão. Vamos começar trabalhando na parte da tendência. # # ### Estimando e eliminando tendências # # Um dos primeiros truques para reduzir a tendência pode ser a transformação. Por exemplo, neste caso, podemos ver claramente que há uma tendência positiva significativa. Assim, podemos aplicar transformações que penalizam valores mais altos do que valores menores. Estes podem ser um log, raiz quadrada, raiz cúbica, etc. 
Vamos pegar uma transformação log aqui para simplificar: # ts_log = np.log(data) plt.plot(ts_log) # Neste caso mais simples, é fácil ver uma tendência de avanço nos dados. Mas não é muito intuitivo na presença de ruído. Assim, podemos usar algumas técnicas para estimar ou modelar essa tendência e, em seguida, removê-la da série. Pode haver muitas maneiras de fazer isso e algumas das mais usadas são: # # - Agregação - tomando a média por um período de tempo como médias mensais / semanais # - Suavização - tendo médias móveis # - Ajuste polinomial - ajuste um modelo de regressão # # Suavização refere-se a estimativas contínuas, ou seja, considerando os últimos instantes. Existem várias maneiras, mas vou discutir duas delas aqui. # # ### Média móvel # # Nesta abordagem, tomamos a média de valores consecutivos de "k" dependendo da frequência das séries temporais. Aqui podemos tirar a média dos últimos 1 ano, ou seja, últimos 12 valores. Os pandas têm funções específicas definidas para determinar as estatísticas de rolagem. moving_avg = ts_log.rolling(12).mean() plt.plot(ts_log) plt.plot(moving_avg, color='red') # A linha vermelha mostra a média de rolamento. Vamos subtrair isso da série original. Observe que, como estamos tirando a média dos últimos 12 valores, a média de rolagem não está definida para os primeiros 11 valores. Isso pode ser observado como: ts_log_moving_avg_diff = ts_log - moving_avg ts_log_moving_avg_diff.head(16) # Observe os 11 primeiros sendo Nan. Vamos descartar esses valores NaN e verificar os gráficos para testar a estacionariedade. ts_log_moving_avg_diff.dropna(inplace=True) test_stationarity(ts_log_moving_avg_diff) # Isto parece uma série muito melhor. Os valores de rolagem parecem estar variando um pouco, mas não há uma tendência específica. # # No entanto, uma desvantagem nessa abordagem particular é que o período de tempo deve ser estritamente definido. Neste caso, podemos obter médias anuais, mas em situações complexas, como a previsão de um preço de ações, é difícil encontrar um número. Então, pegamos uma "média móvel ponderada", em que valores mais recentes recebem um peso maior. Pode haver muitas técnicas para atribuir pesos. # # O próximo tópico seria como **Eliminar tendência e sazonalidade** mas vamos deixa-lo de lado para evitar a parte complicada do tema e quem está confortavel e precisa evoluir no assunto, os dois subtópicos seriam: Diferenciação & Decomposição # + from statsmodels.tsa.seasonal import seasonal_decompose decomposition = seasonal_decompose(ts_log) trend = decomposition.trend seasonal = decomposition.seasonal residual = decomposition.resid plt.subplot(411) plt.plot(ts_log, label='Original') plt.legend(loc='best') plt.subplot(412) plt.plot(trend, label='Trend') plt.legend(loc='best') plt.subplot(413) plt.plot(seasonal,label='Seasonality') plt.legend(loc='best') plt.subplot(414) plt.plot(residual, label='Residuals') plt.legend(loc='best') plt.tight_layout() # - # ### Forecasting # # Familia ARIMA # # Deixe-me dar uma breve introdução ao ARIMA . Não vou entrar nos detalhes técnicos, mas você deve entender esses conceitos detalhadamente se quiser aplicá-los com mais eficiência. ARIMA significa Médias Móveis Integradas Auto-Regressivas . A previsão ARIMA para uma série temporal estacionária nada mais é que uma equação linear (como uma regressão linear). Os preditores dependem dos parâmetros (p, d, q) do modelo ARIMA: # # # Algumas palavras sobre o modelo. Letra por letra, construiremos o nome completo - ARIMA (p, d, q). 
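
# Differencing, mentioned above as the other standard way to remove trend and seasonality,
# can be checked with the same helper. A minimal sketch, assuming `ts_log` and
# `test_stationarity()` from the cells above:

# +
test_stationarity((ts_log - ts_log.shift()).dropna())
# -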
# # # - **AR (p)** - modelo de autorregressão, isto é, regressão da série temporal em si. Premissa básica - os valores da série atual dependem de seus valores anteriores com algum atraso (ou várias defasagens). O atraso máximo no modelo é referido como p. Para determinar o p inicial. (em econometria analisamos um grafico chamado PACF para anlisar o p, em machine learning faremos via grid search). # # - ** MA (q)** - modelo de média móvel. Sem entrar em detalhes, modela o erro da série temporal, novamente a suposição é - o erro atual depende do anterior com algum atraso, que é referido como q. (em econometria usamos o grafico ACF). # # Vamos fazer uma pequena pausa e combinar as primeiras 4 letras: # # **AR(p) + MA(q) = ARMA(p,q)** # # O que temos aqui é o modelo de médio movimento autorregressivo! Se a série é estacionária, pode ser aproximada com essas 4 letras. Devemos continuar? # # - **I(d)**— ordem de integração. É simplesmente o número de diferenças não sazonais necessárias para tornar a série estacionária. Como a ideia de estacionariedade é razoavelmente complicada para essa introdução, vamos defini-la apenas como uma curva "bem comportada", que em séries temporais seria algo não explosivo, com variância constante e sazonalidade constante. # # **AR(p) + I(d) + MA(q) = ARIMA(p,d,q)** # # Há outros filtros como o S de sazonal e outras formas funcionais como o VAC e o VEC e até modelos que preevem volatidade (muito usado no mercado financeiro) como os GARCH. Aqui ficaremos no mais simples. # # # Uma preocupação importante aqui é como determinar o valor de 'p' e 'q'. Nós usamos dois gráficos para determinar esses números. Vamos discuti-los primeiro. # # **Função de Autocorrelação (ACF)**: É uma medida da correlação entre o TS com uma versão defasada de si mesmo. Por exemplo, no intervalo 5, o ACF compararia as séries no instante de tempo 't1' ... 't2' com as séries no instante 't1-5'… 't2-5' (t1-5 e t2 sendo pontos finais). # # # **Função de Autocorrelação Parcial (PACF)**: Mede a correlação entre a TS com uma versão defasada de si mesma, mas depois elimina as variações já explicadas pelas comparações intervenientes. Por exemplo, no lag 5, ele verificará a correlação, mas removerá os efeitos já explicados pelos lags 1 a 4. #ACF and PACF plots: from statsmodels.tsa.stattools import acf, pacf ts_log_diff = ts_log - ts_log.shift() ts_log_diff = ts_log_diff.dropna() lag_acf = acf(ts_log_diff, nlags=20) lag_pacf = pacf(ts_log_diff, nlags=20, method='ols') #Plot ACF: plt.subplot(121) plt.plot(lag_acf) plt.axhline(y=0,linestyle='--',color='gray') plt.axhline(y=-1.96/np.sqrt(len(ts_log_diff)),linestyle='--',color='gray') plt.axhline(y=1.96/np.sqrt(len(ts_log_diff)),linestyle='--',color='gray') plt.title('Autocorrelation Function') #Plot PACF: plt.subplot(122) plt.plot(lag_pacf) plt.axhline(y=0,linestyle='--',color='gray') plt.axhline(y=-1.96/np.sqrt(len(ts_log_diff)),linestyle='--',color='gray') plt.axhline(y=1.96/np.sqrt(len(ts_log_diff)),linestyle='--',color='gray') plt.title('Partial Autocorrelation Function') plt.tight_layout() # Neste gráfico, as duas linhas pontilhadas em ambos os lados de 0 são os intervalos de confiança. Estes podem ser usados para determinar os valores 'p' e 'q' como: # # p - O valor de retardo em que o gráfico PACF cruza o intervalo de confiança superior pela primeira vez. Se você notar de perto, neste caso p = 2. # # q - O valor de retardo no qual o gráfico ACF cruza o intervalo de confiança superior pela primeira vez. Se você notar de perto, neste caso, q = 2. 
# # Agora, vamos criar 3 modelos ARIMA diferentes, considerando efeitos individuais e combinados. Também vou imprimir o RSS para cada um. Por favor, note que aqui RSS é para os valores dos resíduos e não séries reais. # # Precisamos carregar o modelo ARIMA primeiro: from statsmodels.tsa.arima_model import ARIMA # MODELO AR model = ARIMA(ts_log, order=(2, 1, 0)) results_AR = model.fit(disp=-1) plt.plot(ts_log_diff) plt.plot(results_AR.fittedvalues, color='red') # + # MODELO MA model = ARIMA(ts_log, order=(0, 1, 2)) results_MA = model.fit(disp=-1) plt.plot(ts_log_diff) plt.plot(results_MA.fittedvalues, color='red') # + # Combinando AR + MA model = ARIMA(ts_log, order=(2, 1, 2)) results_ARIMA = model.fit(disp=-1) plt.plot(ts_log_diff) plt.plot(results_ARIMA.fittedvalues, color='red') # - ### Sazonalidade import statsmodels.api as sm mod = sm.tsa.statespace.SARIMAX(ts_log, order=(2, 1, 2), seasonal_order=(1, 1, 0, 12), enforce_stationarity=False, enforce_invertibility=False) results = mod.fit() print(results.summary().tables[1]) plt.style.use('fivethirtyeight') results.plot_diagnostics(figsize=(16, 8)) plt.show() # ## Fazendo testes # # # Exemplo do crossvalidation com timeseries # <img src='./img/crossvalidation.png'> pred = results.get_prediction(start=pd.to_datetime('1955-01-01'), dynamic=False) pred_ci = pred.conf_int() ax = ts_log['1949':].plot(label='observed') pred.predicted_mean.plot(ax=ax, label='One-step ahead Forecast', alpha=.7, figsize=(14, 7)) ax.fill_between(pred_ci.index, pred_ci.iloc[:, 0], pred_ci.iloc[:, 1], color='k', alpha=.2) ax.set_xlabel('Date') ax.set_ylabel('Passengers') plt.legend() plt.show() pred_uc = results.get_forecast(steps=100) pred_ci = pred_uc.conf_int() ax = ts_log.plot(label='observed', figsize=(14, 7)) pred_uc.predicted_mean.plot(ax=ax, label='Forecast') ax.fill_between(pred_ci.index, pred_ci.iloc[:, 0], pred_ci.iloc[:, 1], color='k', alpha=.25) ax.set_xlabel('Date') ax.set_ylabel('Passengers') plt.legend() plt.show() # ### Parte 2 - A # Usando fbprophet # Série de tempo # # Venda de móveis vs material de escritório # # De acordo com nossos dados, houve um número muito maior de vendas de materiais de escritório do que de móveis ao longo dos anos. # # Vamos comparar as vendas de duas categorias no mesmo período de tempo. Isso significa combinar dois quadros de dados em um e plotar as séries temporais dessas duas categorias em um único gráfico. 
# df = pd.read_excel('./data/Sample - Superstore.xls') df.head(2) furniture = df.loc[df['Category'] == 'Furniture'] office = df.loc[df['Category'] == 'Office Supplies'] furniture.shape, office.shape cols = ['Row ID', 'Order ID', 'Ship Date', 'Ship Mode', 'Customer ID', 'Customer Name', 'Segment', 'Country', 'City', 'State', 'Postal Code', 'Region', 'Product ID', 'Category', 'Sub-Category', 'Product Name', 'Quantity', 'Discount', 'Profit'] furniture.drop(cols, axis=1, inplace=True) office.drop(cols, axis=1, inplace=True) furniture = furniture.sort_values('Order Date') office = office.sort_values('Order Date') furniture = furniture.groupby('Order Date')['Sales'].sum().reset_index() office = office.groupby('Order Date')['Sales'].sum().reset_index() furniture = furniture.set_index('Order Date') office = office.set_index('Order Date') y_furniture = furniture['Sales'].resample('MS').mean() y_office = office['Sales'].resample('MS').mean() furniture = pd.DataFrame({'Order Date':y_furniture.index, 'Sales':y_furniture.values}) office = pd.DataFrame({'Order Date': y_office.index, 'Sales': y_office.values}) store = furniture.merge(office, how='inner', on='Order Date') store.rename(columns={'Sales_x': 'furniture_sales', 'Sales_y': 'office_sales'}, inplace=True) store.head() plt.figure(figsize=(20, 8)) plt.plot(store['Order Date'], store['furniture_sales'], 'b-', label = 'furniture') plt.plot(store['Order Date'], store['office_sales'], 'r-', label = 'office supplies') plt.xlabel('Date'); plt.ylabel('Sales'); plt.title('Sales of Furniture and Office Supplies') plt.legend(); # Observamos que as vendas de móveis e materiais de escritório compartilhavam um padrão sazonal similar. Início do ano é a entressafra para ambas as categorias. Parece que o horário de verão também é tranquilo para o material de escritório. Além disso, a média diária de vendas de móveis é maior do que a dos materiais de escritório na maioria dos meses. É compreensível, já que o valor do mobiliário deve ser muito maior do que o valor do material de escritório. Ocasionalmente, o material de escritório passava a mobília em média de vendas diárias. Vamos descobrir quando foi a primeira vez que as vendas de material de escritório ultrapassaram as vendas de móveis. # first_date = store.loc[np.min(list(np.where(store['office_sales'] > store['furniture_sales'])[0])), 'Order Date'] print("Office supplies first time produced higher sales than furniture is {}.".format(first_date.date())) # # Modelando com facebook prophet # # Lançada pelo Facebook em 2017, a ferramenta de previsão Profeta foi projetada para analisar séries temporais que exibem padrões em diferentes escalas de tempo, como anual, semanal e diária. Ele também possui recursos avançados para modelar os efeitos de feriados em uma série de tempo e implementar pontos de mudança personalizados. Portanto, estamos usando o Prophet para colocar um modelo em funcionamento. 
# # + from fbprophet import Prophet furniture = furniture.rename(columns={'Order Date': 'ds', 'Sales': 'y'}) furniture_model = Prophet(interval_width=0.95) furniture_model.fit(furniture) office = office.rename(columns={'Order Date': 'ds', 'Sales': 'y'}) office_model = Prophet(interval_width=0.95) office_model.fit(office) furniture_forecast = furniture_model.make_future_dataframe(periods=36, freq='MS') furniture_forecast = furniture_model.predict(furniture_forecast) office_forecast = office_model.make_future_dataframe(periods=36, freq='MS') office_forecast = office_model.predict(office_forecast) plt.figure(figsize=(18, 6)) furniture_model.plot(furniture_forecast, xlabel = 'Date', ylabel = 'Sales') plt.title('Furniture Sales'); # - plt.figure(figsize=(18, 6)) office_model.plot(office_forecast, xlabel = 'Date', ylabel = 'Sales') plt.title('Office Supplies Sales'); # + # Comparando as previsões furniture_names = ['furniture_%s' % column for column in furniture_forecast.columns] office_names = ['office_%s' % column for column in office_forecast.columns] merge_furniture_forecast = furniture_forecast.copy() merge_office_forecast = office_forecast.copy() merge_furniture_forecast.columns = furniture_names merge_office_forecast.columns = office_names forecast = pd.merge(merge_furniture_forecast, merge_office_forecast, how = 'inner', left_on = 'furniture_ds', right_on = 'office_ds') forecast = forecast.rename(columns={'furniture_ds': 'Date'}).drop('office_ds', axis=1) plt.figure(figsize=(10, 7)) plt.plot(forecast['Date'], forecast['furniture_trend'], 'b-') plt.plot(forecast['Date'], forecast['office_trend'], 'r-') plt.legend(); plt.xlabel('Date'); plt.ylabel('Sales') plt.title('Furniture vs. Office Supplies Sales Trend'); # - plt.figure(figsize=(10, 7)) plt.plot(forecast['Date'], forecast['furniture_yhat'], 'b-') plt.plot(forecast['Date'], forecast['office_yhat'], 'r-') plt.legend(); plt.xlabel('Date'); plt.ylabel('Sales') plt.title('Furniture vs. Office Supplies Estimate'); furniture_model.plot_components(furniture_forecast); office_model.plot_components(office_forecast); # É bom ver que as vendas de móveis e material de escritório aumentaram linearmente ao longo do tempo e continuarão crescendo, embora o crescimento do material de escritório pareça um pouco mais forte. # # O pior mês para móveis é abril, o pior mês para material de escritório é fevereiro. # O melhor mês para móveis é dezembro, e o melhor mês para material de escritório é outubro. # # Existem muitas análises de séries temporais que podemos explorar a partir de agora, como previsão com limites de incerteza, ponto de mudança e detecção de anomalias, previsão de séries temporais com fonte de dados externa. Nós apenas começamos. # # ### Parte 2 - B # # Usando fbprophet para dados de ações. # %matplotlib inline import matplotlib.pyplot as plt # + import quandl # quandl for financial data import pandas as pd quandl.ApiConfig.api_key = '' #getyourownkey! # Retrieve TSLA data from Quandl tesla = quandl.get('WIKI/TSLA') # Retrieve the GM data from Quandl gm = quandl.get('WIKI/GM') gm.head(5) # - gm.to_csv('gm.csv') tesla.to_csv('tesla.csv') # Existe uma quantidade quase ilimitada de dados no quandl, mas eu queria me concentrar em comparar duas empresas dentro do mesmo setor, a Tesla e a General Motors. 
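
# If no Quandl API key is at hand, the cached CSVs written above can be reloaded instead.
# This is a sketch under that assumption; note that the later cells expect 'Date' as a
# regular column (they call set_index('Date') themselves), so the index is not set here.

# +
tesla = pd.read_csv('tesla.csv', parse_dates=['Date'])
gm = pd.read_csv('gm.csv', parse_dates=['Date'])
# -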
A Tesla é uma empresa fascinante não apenas porque é a primeira empresa americana de sucesso em 111 anos, mas também porque, em 2017, era a empresa de automóveis mais valiosa dos Estados Unidos, apesar de vender apenas quatro carros diferentes. A outra concorrente do título de empresa de automóveis mais valiosa é a General Motors, que recentemente mostrou sinais de abraçar o futuro dos carros construindo alguns veículos totalmente elétricos (mas não com aparência legal). # # # ## Exploração de Dados # # Antes de entrarmos na modelagem, é melhor ter uma ideia da estrutura e dos intervalos fazendo alguns gráficos exploratórios. Isso também nos permite procurar outliers ou valores ausentes que precisam ser corrigidos. Se qualquer um dos códigos gráficos parecer intimidador, não se preocupe. Eu também acho que o matplotlib não é intuitivo e, muitas vezes, copia e cola exemplos do Stack Overflow ou da documentação para obter o gráfico que quero. Uma das regras da programação é não reinventar uma solução que já existe! # The adjusted close accounts for stock splits, so that is what we should graph plt.plot(gm.index, gm['Adj. Close']) plt.title('GM Stock Price') plt.ylabel('Price ($)'); plt.show() plt.plot(tesla.index, tesla['Adj. Close'], 'r') plt.title('Tesla Stock Price') plt.ylabel('Price ($)'); plt.show(); tesla.set_index('Date', inplace=True) gm.set_index('Date', inplace=True) # + # Adicionando o total de ações (para descobrirmos o preço total das empresas) # Yearly average number of shares outstanding for Tesla and GM tesla_shares = {2018: 168e6, 2017: 162e6, 2016: 144e6, 2015: 128e6, 2014: 125e6, 2013: 119e6, 2012: 107e6, 2011: 100e6, 2010: 51e6} gm_shares = {2018: 1.42e9, 2017: 1.50e9, 2016: 1.54e9, 2015: 1.59e9, 2014: 1.61e9, 2013: 1.39e9, 2012: 1.57e9, 2011: 1.54e9, 2010:1.50e9} # Create a year column tesla['Year'] = tesla.index.year gm['Year'] = gm.index.year # Take Dates from index and move to Date column tesla.reset_index(level=0, inplace = True) tesla['cap'] = 0 gm.reset_index(level=0, inplace = True) gm['cap'] = 0 # Calculate market cap for all years for i, year in enumerate(tesla['Year']): # Retrieve the shares for the year shares = tesla_shares.get(year) # Update the cap column to shares times the price tesla.loc[i, 'cap'] = shares * tesla.loc[i, 'Adj. Close'] # Calculate market cap for all years for i, year in enumerate(gm['Year']): # Retrieve the shares for the year shares = gm_shares.get(year) # Update the cap column to shares times the price gm.loc[i, 'cap'] = shares * gm.loc[i, 'Adj. Close'] # - # Isso cria uma coluna "cap" para Tesla. Nós fazemos o mesmo processo com os dados do GM e depois mesclamos os dois. A mesclagem é uma parte essencial de um fluxo de trabalho de ciência de dados porque nos permite unir conjuntos de dados em uma coluna compartilhada. Nesse caso, temos preços de ações para duas empresas diferentes nas mesmas datas e, portanto, queremos juntar os dados na coluna de data. Realizamos uma mesclagem "interna" para salvar apenas as entradas de Data presentes nos dois quadros de dados. Após a fusão, renomeamos as colunas para saber qual delas combina com a empresa do carro. 
# + # Merge the two datasets and rename the columns cars = gm.merge(tesla, how='inner', on='Date') cars.rename(columns={'cap_x': 'gm_cap', 'cap_y': 'tesla_cap'}, inplace=True) # Select only the relevant columns cars = cars.loc[:, ['Date', 'gm_cap', 'tesla_cap']] # Divide to get market cap in billions of dollars cars['gm_cap'] = cars['gm_cap'] / 1e9 cars['tesla_cap'] = cars['tesla_cap'] / 1e9 cars.head() # - # O valor de mercado está em bilhões de dólares. Podemos ver que a General Motors começou nosso período de análise com um valor de mercado de cerca de 30 vezes o da Tesla! As coisas permanecem assim ao longo de toda a linha do tempo? # plt.figure(figsize=(10, 8)) plt.plot(cars['Date'], cars['gm_cap'], 'b-', label = 'GM') plt.plot(cars['Date'], cars['tesla_cap'], 'r-', label = 'TESLA') plt.xlabel('Date'); plt.ylabel('Market Cap (Billions $)'); plt.title('Market Cap of GM and Tesla') plt.legend(); # Observamos uma ascensão meteórica para a Tesla e um pequeno aumento para a General Motors ao longo dos dados. Tesla ultrapassa a GM em valor durante 2017! # import numpy as np # Find the first and last time Tesla was valued higher than GM first_date = cars.loc[np.min(list(np.where(cars['tesla_cap'] > cars['gm_cap'])[0])), 'Date'] last_date = cars.loc[np.max(list(np.where(cars['tesla_cap'] > cars['gm_cap'])[0])), 'Date'] print("Tesla was valued higher than GM from {} to {}.".format(first_date.date(), last_date.date())) # Durante esse período, a Tesla vendeu cerca de 48.000 carros, enquanto a GM vendeu 1.500.000. GM foi avaliado menos que Tesla durante um período em que vendeu 30 vezes mais carros! Isso definitivamente mostra o poder de um executivo persuasivo e um produto de alta qualidade - se extremamente baixo em quantidade. Embora o valor de Tesla seja agora menor do que a GM, uma boa pergunta pode ser: podemos esperar que a Tesla supere novamente a GM? Quando isso acontecerá? Para isso, recorremos a modelos aditivos para previsão, ou seja, para prever o futuro. # # ### Modelando com fbprophet # # O pacote prophet do Facebook foi lançado em 2017 para Python e R, e os cientistas de dados de todo o mundo se alegraram. Profeta é projetado para analisar séries temporais com observações diárias que exibem padrões em diferentes escalas de tempo. Ele também possui recursos avançados para modelar os efeitos de feriados em uma série de tempo e implementar pontos de mudança customizados, mas vamos nos ater às funções básicas para colocar um modelo em funcionamento. O profeta, como o quandl, pode ser instalado com o pip na linha de comando. # # Primeiro importamos o profeta e renomeamos as colunas em nossos dados para o formato correto. A coluna Data deve ser chamada de "ds" e a coluna de valor que queremos prever "y". Em seguida, criamos modelos de profeta e os adequamos aos dados, da mesma forma que um modelo de aprendizado de máquina do Scikit-Learn: # conda install -c conda-forge fbprophet import fbprophet # Prophet requires columns ds (Date) and y (value) gm = gm.rename(columns={'Date': 'ds', 'cap': 'y'}) # Put market cap in billions gm['y'] = gm['y'] / 1e9 # Make the prophet model and fit on the data gm_prophet = fbprophet.Prophet(changepoint_prior_scale=0.15) gm_prophet.fit(gm) # Ao criar os modelos profeta, configurei o ponto de mudança antes de 0,15, acima do valor padrão de 0,05. Esse hiperparâmetro é usado para controlar a sensibilidade da tendência às mudanças, com um valor mais alto sendo mais sensível e um valor menor menos sensível. 
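
# Later cells reference `tesla_prophet` and `tesla_forecast`, which are not created anywhere
# in this excerpt. Below is a minimal sketch that mirrors the GM steps for Tesla, assuming
# the `tesla` dataframe (with its 'Date' and 'cap' columns) from the cells above:

# +
tesla = tesla.rename(columns={'Date': 'ds', 'cap': 'y'})
# Put market cap in billions
tesla['y'] = tesla['y'] / 1e9

# Make the prophet model and fit on the data
tesla_prophet = fbprophet.Prophet(changepoint_prior_scale=0.15)
tesla_prophet.fit(tesla)

# Make a future dataframe for 2 years and predict
tesla_forecast = tesla_prophet.make_future_dataframe(periods=365 * 2, freq='D')
tesla_forecast = tesla_prophet.predict(tesla_forecast)
# -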
# This value addresses one of the most important trade-offs in machine learning: bias vs. variance. If we fit our training data too closely, called overfitting, we have too much variance and the model will not be able to generalize well to new data.
#
# On the other hand, if the model does not capture the trends in the training data, it is underfitting and has too much bias. When a model is underfitting, increasing the changepoint prior gives it more flexibility to fit the data; when a model is overfitting, decreasing the prior limits that flexibility. The effect of the changepoint prior scale can be illustrated by plotting predictions made with a range of values:
#
# <img src='./img/1_jEFOLncknBJ8cPQSBQDktA.png'>
#
#
# The higher the changepoint prior scale, the more flexible the model and the more closely it fits the training data. That may sound like exactly what we want, but learning the training data too well can lead to overfitting and an inability to make accurate predictions on new data.
#
# We therefore need to find the right balance between fitting the training data and being able to generalize to new data. Since stocks change from day to day, and we want the model to capture that, I increased the flexibility after experimenting with a range of values. In the call that creates a prophet model we can also specify changepoints, which occur when a time series goes from increasing to decreasing, or from increasing slowly to increasing rapidly (they are located where the rate of change in the time series is greatest).
#
# Changepoints can correspond to significant events such as product launches or macroeconomic swings in the market. If we do not specify changepoints, prophet will calculate them for us. To make forecasts, we need to create what is called a future dataframe. We specify the number of future periods to predict (two years) and the frequency of predictions (daily). Then we make predictions with the prophet model we created and the future dataframe:

# Make a future dataframe for 2 years
gm_forecast = gm_prophet.make_future_dataframe(periods=365 * 2, freq='D')

# Make predictions
gm_forecast = gm_prophet.predict(gm_forecast)

# Our future dataframes contain the estimated market cap of Tesla and GM for the next two years. We can visualize the forecasts with prophet's plotting function.

gm_prophet.plot(gm_forecast, xlabel = 'Date', ylabel = 'Market Cap (billions $)')
plt.title('Market Cap of GM');

# The black dots represent the actual values (notice how they stop at the start of 2018), the blue line indicates the forecast values, and the light blue shaded region is the uncertainty (always a critical part of any forecast).
#
# The region of uncertainty grows the further into the future the forecast is made, because the initial uncertainty propagates and compounds over time. The same is seen in weather forecasts, which become less accurate the further ahead they are made. We can also inspect the changepoints identified by the model.
#
# Again, changepoints represent where the growth rate of the time series changes significantly (going from increasing to decreasing, for example).
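# Note: the next cells use `tesla_prophet` and `tesla_forecast`, which are not created in the cells shown here. A minimal sketch, assuming the Tesla data is prepared and modeled exactly like the GM data above:

# +
# Sketch: mirror the GM steps for Tesla (rename columns, convert to billions, fit, forecast 2 years)
tesla = tesla.rename(columns={'Date': 'ds', 'cap': 'y'})
tesla['y'] = tesla['y'] / 1e9

tesla_prophet = fbprophet.Prophet(changepoint_prior_scale=0.15)
tesla_prophet.fit(tesla)

tesla_forecast = tesla_prophet.make_future_dataframe(periods=365 * 2, freq='D')
tesla_forecast = tesla_prophet.predict(tesla_forecast)
# -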
# gm_prophet.plot(gm_forecast, xlabel = 'Date', ylabel = 'Market Cap (billions $)')

tesla_prophet.changepoints[:10]

# For comparison, we can look at Google search trends for Tesla over this period to see whether the changes line up. We plot the changepoints (vertical lines) and the search trends on the same graph:

# Load in the data
tesla_search = pd.read_csv('data/tesla_search_terms.csv')

# Convert month to a datetime
tesla_search['Month'] = pd.to_datetime(tesla_search['Month'])
tesla_changepoints = [str(date) for date in tesla_prophet.changepoints]

# Plot the search frequency
plt.plot(tesla_search['Month'], tesla_search['Search'], label = 'Searches')

# Plot the changepoints
plt.vlines(tesla_changepoints, ymin = 0, ymax= 100, colors = 'r', linewidth=0.6, linestyles = 'dashed', label = 'Changepoints')

# Formatting of plot
plt.grid('off'); plt.ylabel('Relative Search Freq'); plt.legend()
plt.title('Tesla Search Terms and Changepoints');

# Some of the changepoints in Tesla's market cap line up with changes in the frequency of Tesla searches, but not all of them. From this I would say that relative Google search frequency is not a great indicator of stock changes.
#
#
# We still need to figure out when Tesla's market capitalization will overtake General Motors'. Since we have both forecasts for the next two years, we can plot the two companies on the same graph after merging the dataframes. Before merging, we rename the columns to keep track of the data.

gm_names = ['gm_%s' % column for column in gm_forecast.columns]
tesla_names = ['tesla_%s' % column for column in tesla_forecast.columns]

# Dataframes to merge
merge_gm_forecast = gm_forecast.copy()
merge_tesla_forecast = tesla_forecast.copy()

# Rename the columns
merge_gm_forecast.columns = gm_names
merge_tesla_forecast.columns = tesla_names

# Merge the two datasets
forecast = pd.merge(merge_gm_forecast, merge_tesla_forecast, how = 'inner', left_on = 'gm_ds', right_on = 'tesla_ds')

# Rename date column
forecast = forecast.rename(columns={'gm_ds': 'Date'}).drop('tesla_ds', axis=1)

# First let's plot just the estimate. The estimate (called "yhat" in the prophet package) smooths out some of the noise in the data, so it looks a little different from the raw plots. The level of smoothness depends on the changepoint prior scale: higher priors mean a more flexible model with more ups and downs.
#
# <img src='./img/1_wtXXjTJK2J9MQFFkyGyhwA.png'>
#
# Our model believes that Tesla's brief overtaking of GM in 2017 was just noise, and it is not until early 2018 that Tesla passes GM for good in the forecast. The exact date is January 27, 2018, so if that happens, I will happily take credit for predicting the future!
#
# When making the plot above, we left out the most important part of a forecast: the uncertainty! We can use matplotlib (see the notebook) to show the uncertainty regions:
#
# <img src='./img/1_0rt_W8NzoFG_WQ0I9mncyg.png'>
#
#
# This is a better representation of the forecast. It shows that both companies are expected to increase in value, but Tesla will grow more quickly than General Motors. Again, the uncertainty widens over time, as expected for a forecast, and Tesla's lower bound is below GM's upper bound in 2020, meaning GM could retain the lead.
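# The uncertainty plot above is only shown as an image. A minimal matplotlib sketch of the same idea, drawn from the merged `forecast` dataframe (this is my reconstruction of how it could be plotted, not the notebook's exact code; the column names follow the gm_/tesla_ prefixes created during the merge):

# +
# Sketch: plot both estimates with their uncertainty intervals
fig, ax = plt.subplots(1, 1, figsize=(10, 8))

ax.plot(forecast['Date'], forecast['gm_yhat'], 'b-', label='GM estimate')
ax.fill_between(forecast['Date'].dt.to_pydatetime(), forecast['gm_yhat_lower'],
                forecast['gm_yhat_upper'], facecolor='b', alpha=0.3, edgecolor='k')

ax.plot(forecast['Date'], forecast['tesla_yhat'], 'r-', label='Tesla estimate')
ax.fill_between(forecast['Date'].dt.to_pydatetime(), forecast['tesla_yhat_lower'],
                forecast['tesla_yhat_upper'], facecolor='r', alpha=0.3, edgecolor='k')

plt.xlabel('Date'); plt.ylabel('Market Cap (Billions $)')
plt.title('Market Cap Forecasts with Uncertainty')
plt.legend();
# -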
# ### Trends and Patterns
#
# The last step of the market cap analysis is to look at the overall trend and the patterns. Prophet lets us easily visualize the overall trend and the component patterns:

# Plot the trends and patterns
gm_prophet.plot_components(gm_forecast)

# The trend is quite clear: GM's stock is rising and is going to keep rising. The yearly pattern is interesting because it seems to suggest that GM gains value at the end of the year, with a long, slow decline over the summer. We can try to determine whether there is a correlation between the yearly market cap pattern and GM's average monthly vehicle sales over the period. I first gathered monthly vehicle sales from Google and then averaged them by month using groupby. This is another critical data science operation, because we often want to compare statistics across categories, such as users in a particular age group or vehicles from a single manufacturer. In this case, we want the average sales in each month, so we group by month and then average the sales.

gm_sales_grouped = gm_sales.groupby('Month').mean()

# Monthly sales do not appear to be correlated with market cap. Monthly sales are second highest in August, which is right at the lowest point of the market cap pattern! Looking at the weekly trend, there does not seem to be any meaningful signal (no stock prices are recorded on weekends, so we look at the change during the week). This is to be expected, since the random walk theory in economics says there is no predictable pattern in stock prices on a daily basis.
#
# As our analysis shows, over the long run stocks tend to increase, but on a day-to-day scale there is almost no pattern we can take advantage of, even with the best models. A simple look at the Dow Jones Industrial Average (a market index of 30 of the largest companies on the stock exchange) illustrates this point well:
#
# <img src='./img/1_5OHpAvp_w5g7jccqJ8OYaA (1).png'>
#
#
# Clearly, the message is to go back to 1900 and invest your money! Or, in reality, when the market drops, don't pull out, because according to history it will come back. On the global scale, day-to-day fluctuations are too small to even see, and if we are thinking like data scientists we realize that playing the daily stock game is foolish compared to investing in the whole market and holding for long periods of time.
#
# Prophet can also be applied to large-scale data measures, such as Gross Domestic Product, a measure of the overall size of a country's economy. I made the following forecast by building prophet models based on the historical GDP of the US and China (a sketch of how such a model could be set up follows at the end of this section).
#
# <img src='./img/1_1I9G9ek3oXmuS2Fa9KVf9g.png' >
#
# The exact date when China will overtake the US in GDP is 2036! This model is limited because of the low frequency of the observations (GDP is measured once per quarter, but prophet works best with daily data), yet it provides a basic forecast without any of the required macroeconomic knowledge.
#
#
# There are many ways to model time series, from simple linear regression to recurrent neural networks with LSTM cells. Additive models are useful because they are quick to develop, fast to train, provide interpretable patterns, and make forecasts with uncertainties. Prophet's capabilities are impressive, and we have only scratched the surface here.
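# The GDP forecast above is only described in prose. A minimal sketch of how such a model could be set up, assuming a hypothetical quarterly file `data/gdp.csv` with Country, Date, and GDP columns (the file name and columns are assumptions, not part of the original notebook):

# +
# Sketch: GDP forecasts for two countries (hypothetical file and column names)
gdp = pd.read_csv('data/gdp.csv', parse_dates=['Date'])

gdp_forecasts = {}
for country in ['United States', 'China']:
    # Prophet expects columns ds and y
    country_df = gdp[gdp['Country'] == country].rename(columns={'Date': 'ds', 'GDP': 'y'})

    model = fbprophet.Prophet()
    model.fit(country_df)

    # Forecast roughly 15 years of quarterly values
    future = model.make_future_dataframe(periods=60, freq='Q')
    gdp_forecasts[country] = model.predict(future)
# -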
# I encourage you to use this article and the notebook to explore some of the data offered by Quandl, or your own time series.
#
# Stay tuned for future work on time series analysis, and for an application of prophet to my daily life, see my post on using these techniques to model and predict weight change. As a first step into exploring time series, additive models in Python are the way to go!
#
# ### Part 3
#
# <img src='./img/0_4XXSSYy4nYDDgNex.jpg' >

# +
from xgboost import XGBRegressor

xgb = XGBRegressor()
xgb.fit(X_train_scaled, y_train)

plotModelResults(xgb,
                 X_train=X_train_scaled,
                 X_test=X_test_scaled,
                 plot_intervals=True, plot_anomalies=True)
# -
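# The cell above calls `plotModelResults` on pre-scaled features, but neither the helper nor `X_train_scaled` is defined in this excerpt; they come from an earlier part of that pipeline. A minimal stand-in, under the assumption that the helper reports the mean absolute error and plots predictions against actuals (the interval and anomaly options of the original are not reproduced here):

# +
import matplotlib.pyplot as plt
from sklearn.metrics import mean_absolute_error

def plot_model_results_sketch(model, X_test, y_test):
    """Hypothetical stand-in for plotModelResults: plot forecasts vs. actuals with MAE."""
    prediction = model.predict(X_test)

    plt.figure(figsize=(15, 7))
    plt.plot(prediction, 'g', label='prediction', linewidth=2.0)
    plt.plot(list(y_test), label='actual', linewidth=2.0)

    error = mean_absolute_error(y_test, prediction)
    plt.title('Mean absolute error: {0:.2f}'.format(error))
    plt.legend(loc='best')
    plt.grid(True);
# -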
44,492
/Data Preparation/models_nb2.ipynb
b7bcbd514bb9a5ce70bf32d2a7f0a501299643dc
[]
no_license
CJianYu98/Covid-19-Singapore-Analysis
https://github.com/CJianYu98/Covid-19-Singapore-Analysis
2
1
null
2021-02-12T04:01:31
2021-02-12T03:41:16
Python
Jupyter Notebook
false
false
.py
993,637
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="0JWAGA2ZEn5e" colab_type="text" # # ML Zero to Hero: Fashion MNIST # I want to rebuild my confidence in programming and finally get started on my ML project. This is the second mini-project on my path to building really cool stuff. # # This mini-project is literally stolen from the ["Train your first neural network: basic classification."](https://www.tensorflow.org/beta/tutorials/keras/basic_classification) # + id="Y7P8nrQFFTGw" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="5e771b40-223b-4ea8-831a-69b767b7b8bf" # !pip install -q tensorflow==2.0.0-beta1 # + id="AM698x-T-_Sb" colab_type="code" colab={} import tensorflow as tf from tensorflow import keras import numpy as np import matplotlib.pyplot as plt # + id="dLpvTrgVFrMq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 173} outputId="c59eaec5-a25c-4866-b480-05b275e6cb41" # Load the fashion MNIST dataset fashion_mnist = keras.datasets.fashion_mnist # Separate the dataset into training and test data (train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data() class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot'] # + [markdown] id="9T5Q-lkzGiHE" colab_type="text" # | Label | Class | # |-------|-------------| # | 0 | T-shirt/top | # | 1 | Trouser | # | 2 | Pullover | # | 3 | Dress | # | 4 | Coat | # | 5 | Sandal | # | 6 | Shirt | # | 7 | Sneaker | # | 8 | Bag | # | 9 | Ankle boot | # + id="nhrrb6zpGdLi" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="6bf1bc0d-94f2-4b9a-bee7-15992ac7d3a5" # Should be 60,000 training images, width of 28, height of 28 train_images.shape # + id="9-7S_1tpGztu" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 269} outputId="5d98c6b5-976e-4f22-da3f-e1616cc34375" plt.figure() plt.imshow(train_images[0]) plt.colorbar() plt.grid(False) plt.show() # + id="5cDQXZAtG-Ge" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 592} outputId="f4ae5926-a9e7-42f0-9178-67c12a884317" # Preprocessing to normalize the data between 0 and 1 train_images = train_images / 255.0 test_images = test_images / 255.0 # Verify that the data isn't actually garbage plt.figure(figsize=(10,10)) for i in range(25): plt.subplot(5,5,i+1) plt.xticks([]) plt.yticks([]) plt.grid(False) plt.imshow(train_images[i], cmap=plt.cm.binary) plt.xlabel(class_names[train_labels[i]]) plt.show() # + id="imcXxIlPIRZb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 105} outputId="b8b97590-4213-4957-f90f-412ce6d85e0c" model = keras.Sequential([ keras.layers.Flatten(input_shape=(28, 28)), keras.layers.Dense(128, activation='relu'), keras.layers.Dense(10, activation='softmax') ]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) # + id="ATVbqEVBNU5b" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 714} outputId="8ab393dc-7f46-46b0-f2e1-61a446092c63" model.fit(train_images, train_labels, epochs=20) # + id="ELr5Oo2rNciw" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="5ed0e803-f02f-4b47-9b58-5d1d78419ea8" test_loss, test_acc = model.evaluate(test_images, test_labels) 
print(f'\nTest accuracy: {test_acc}') # + id="pOYP7ntuTTlm" colab_type="code" colab={} predictions = model.predict(test_images) # + id="s8GfrbRtTZNk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="168e17b8-abeb-473a-c811-ec893fdb20dc" # Highest confidence value print(f'Predicted: {np.argmax(predictions[0])}\nActual: {test_labels[0]}') # + id="x20J3ScqTbKn" colab_type="code" colab={} def plot_image(i, predictions_array, true_label, img): predictions_array, true_label, img = predictions_array[i], true_label[i], img[i] plt.grid(False) plt.xticks([]) plt.yticks([]) plt.imshow(img, cmap=plt.cm.binary) predicted_label = np.argmax(predictions_array) if predicted_label == true_label: color = 'blue' else: color = 'red' plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label], 100*np.max(predictions_array), class_names[true_label]), color=color) def plot_value_array(i, predictions_array, true_label): predictions_array, true_label = predictions_array[i], true_label[i] plt.grid(False) plt.xticks([]) plt.yticks([]) thisplot = plt.bar(range(10), predictions_array, color="#777777") plt.ylim([0, 1]) predicted_label = np.argmax(predictions_array) thisplot[predicted_label].set_color('red') thisplot[true_label].set_color('blue') # + id="HbzeOQZlUxFM" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 206} outputId="3bc731d6-7bd4-4d21-d204-5f13b9ec079a" # Look at first image and see probabilities i = 0 plt.figure(figsize=(6,3)) plt.subplot(1,2,1) plot_image(i, predictions, test_labels, test_images) plt.subplot(1,2,2) plot_value_array(i, predictions, test_labels) plt.show() # + id="gwYIj7e8U379" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 206} outputId="21938d75-6a73-44d1-a9e0-20ffb109d657" # Look at 13th image and see prediction probabilities i = 12 plt.figure(figsize=(6,3)) plt.subplot(1,2,1) plot_image(i, predictions, test_labels, test_images) plt.subplot(1,2,2) plot_value_array(i, predictions, test_labels) plt.show() # + id="bMTY_-9fU9P-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 592} outputId="52b4e45a-289c-4e06-efb2-936146787eca" # Plot the first X test images, their predicted labels, and the true labels. # Color correct predictions in blue and incorrect predictions in red. num_rows = 5 num_cols = 3 num_images = num_rows*num_cols plt.figure(figsize=(2*2*num_cols, 2*num_rows)) for i in range(num_images): plt.subplot(num_rows, 2*num_cols, 2*i+1) plot_image(i, predictions, test_labels, test_images) plt.subplot(num_rows, 2*num_cols, 2*i+2) plot_value_array(i, predictions, test_labels) plt.show() # + id="HHecRnqMVB7L" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 372} outputId="56e1aed5-71ab-4cb8-b2b2-5e4241285830" # Grab an image from the test dataset. img = test_images[0] print(img.shape) # Add the image to a batch where it's the only member. 
img = (np.expand_dims(img,0)) print(img.shape) predictions_single = model.predict(img) print(predictions_single) plot_value_array(0, predictions_single, test_labels) _ = plt.xticks(range(10), class_names, rotation=45) f = cross_validate(estimator = SVC(kernel="rbf", C=0.95, class_weight='balanced'), X = X_train, y = y_train, scoring=scoring, cv = 5, n_jobs = -1) print_cross_validation_result(cross_val_svc_rbf) # + print('-'*10, 'SVM Sigmoid', '-'*10) cross_val_svc_sigmoid = cross_validate(estimator = SVC(kernel="sigmoid", C=0.06, class_weight='balanced'), X = X_train, y = y_train, scoring=scoring, cv = 5, n_jobs = -1) print_cross_validation_result(cross_val_svc_sigmoid) # + print('-'*10, 'XGBoost', '-'*10) cross_val_svc_xgboost = cross_validate(estimator = xgb.XGBClassifier(use_label_encoder=False, class_weight='balanced'), X = X_train, y = y_train, scoring=scoring, cv = 5, n_jobs = -1) print_cross_validation_result(cross_val_svc_xgboost) # + print('-'*10, 'Naives Bayes', '-'*10) cross_val_svc_nb = cross_validate(estimator = GaussianNB(), X = X_train, y = y_train, scoring=scoring, cv = 5, n_jobs = -1) print_cross_validation_result(cross_val_svc_nb) # - def plot_coefficients(classifier, feature_names, modelname, top_features=20): coef = classifier.coef_.ravel() top_positive_coefficients = np.argsort(coef)[-top_features:] top_negative_coefficients = np.argsort(coef)[:top_features] top_coefficients = np.hstack([top_negative_coefficients, top_positive_coefficients]) # create plot plt.figure(figsize=(16, 6)) plt.title('Important features by %s model' % (modelname), fontsize=20) plt.ylabel('Coefficient', fontsize=18) plt.xlabel('Non-thoughtful posts <<------------------ Important features ------------------>> Thoughtful posts', fontsize=18) colors = ['red' if c < 0 else 'blue' for c in coef[top_coefficients]] plt.bar(np.arange(2 * top_features), coef[top_coefficients], color=colors) feature_names = np.array(feature_names) plt.xticks(np.arange(0, 0 + 2 * top_features), feature_names[top_coefficients], rotation=60, ha='right', fontsize=14) plt.show() svm = LinearSVC(C=0.005, class_weight='balanced') svm.fit(X_train, y_train) plot_coefficients(svm, vectorizer.get_feature_names(), "Linear SVM") # with processed text svm = LinearSVC(C=0.005, class_weight='balanced') svm.fit(X_train, y_train) plot_coefficients(svm, vectorizer.get_feature_names(), "Linear SVM") def classify_and_evaluate(mymodel, X_test): predicted = mymodel.predict(X_test) y_true = y_test y_pred = predicted print(classification_report(y_true, y_pred)) print() for model_name in models_dict: model = models_dict[model_name].fit(X_train, y_train) print('-'*10, model_name, '-'*10) classify_and_evaluate(model, X_test) # with processed text for model_name in models_dict: model = models_dict[model_name].fit(X_train, y_train) print('-'*10, model_name, '-'*10) classify_and_evaluate(model, X_test) # with 6 features for model_name in models_dict: model = models_dict[model_name].fit(X_train, y_train) print('-'*10, model_name, '-'*10) classify_and_evaluate(model, X_test) # with features, no relevance score for model_name in models_dict: model = models_dict[model_name].fit(X_train, y_train) print('-'*10, model_name, '-'*10) classify_and_evaluate(model, X_test) # max_features = 5000 for model_name in models_dict: model = models_dict[model_name].fit(X_train, y_train) print('-'*10, model_name, '-'*10) classify_and_evaluate(model, X_test) # max_features = 15000 for model_name in models_dict: model = models_dict[model_name].fit(X_train, y_train) 
print('-'*10, model_name, '-'*10) classify_and_evaluate(model, X_test) # + m = LogisticRegression().fit(X_train, y_train) # import pickle # save_classifier = open("best_countvec.pickle","wb") #binary write # pickle.dump(m, save_classifier) # save_classifier.close() # classifier_saved = open("best_countvec.pickle", "rb") #binary read # classifier_load = pickle.load(classifier_saved) # classifier_saved.close() predictions = m.predict(X_test) print(f1_score(y_test,predictions, average='macro')) print(precision_score(y_test,predictions, average='macro')) print(recall_score(y_test,predictions, average='macro')) print(f1_score(y_test,predictions, average='weighted')) # - # # Ensemble learning X_test1 = test_df[['Num Pronouns', 'Average Loglikelihood', 'Length Category', 'Num Verbs', 'Num Discourse Relations']] # X_test2 = test_df[['Num Pronouns', 'Average Loglikelihood', 'Relevance score']] X_test2 = test_df[['Length', 'Average Loglikelihood', 'Num Verbs', 'Num Discourse Relations', 'Relevance score', 'Num Pronouns']] X_test5 = test_df[['Average Loglikelihood','Num Pronouns', 'Relevance score']] X_test6 = test_df[['Length Category', 'Average Loglikelihood', 'Num Verbs','Relevance score', 'Num Pronouns']] y_test1 = test_df[['Thoughtful?']] # + m1 = open("valuable_classifiers/best_f1.pickle", 'rb') # m2 = open("valuable_classifiers/best_recall.pickle", 'rb') m2 = open("valuable_classifiers/best_svm.pickle", 'rb') m3 = open("valuable_classifiers/best_countvec_nofeatures.pickle", 'rb') clf1_load = pickle.load(m1) clf2_load = pickle.load(m2) clf3_load = pickle.load(m3) m1.close() m2.close() m3.close() predictions1 = clf1_load.predict(X_test1) predictions2 = clf2_load.predict(X_test2) predictions3 = clf3_load.predict(X_test) print(f1_score(y_test1, predictions1, average='macro')) print(f1_score(y_test1, predictions2, average='macro')) print(f1_score(y_test, predictions3, average='macro')) print() df = pd.DataFrame({"p1": predictions1, 'p2':predictions2, 'p3':predictions3}) df['final_p'] = round((df['p1'] + df['p2'] + df['p3'])/3) print(accuracy_score(y_test, df['final_p'])) print(f1_score(y_test, df['final_p'], average='macro')) print(precision_score(y_test, df['final_p'], average='macro')) print(recall_score(y_test, df['final_p'], average='macro')) # + m1 = open("valuable_classifiers/best_f1.pickle", 'rb') m2 = open("valuable_classifiers/best_svm.pickle", 'rb') m3 = open("valuable_classifiers/best_countvec_nofeatures.pickle", 'rb') m4 = open("valuable_classifiers/best_countvec_features.pickle", 'rb') m5 = open("valuable_classifiers/best_recall.pickle", 'rb') clf1_load = pickle.load(m1) clf2_load = pickle.load(m2) clf3_load = pickle.load(m3) clf4_load = pickle.load(m4) clf5_load = pickle.load(m5) m1.close() m2.close() m3.close() m4.close() m5.close() predictions1 = clf1_load.predict(X_test1) predictions2 = clf2_load.predict(X_test2) predictions3 = clf3_load.predict(X_test) predictions4 = clf4_load.predict(X_test4) predictions5 = clf5_load.predict(X_test5) print(f1_score(y_test, predictions1, average='macro')) print(f1_score(y_test, predictions2, average='macro')) print(f1_score(y_test, predictions3, average='macro')) print(f1_score(y_test, predictions4, average='macro')) print(f1_score(y_test, predictions5, average='macro')) print() df = pd.DataFrame({"p1": predictions1, 'p2':predictions2, 'p3':predictions3, 'p4':predictions4, 'p5':predictions5}) df['final_p'] = round((df['p1'] + df['p2'] + df['p3'] + df['p4'] + df['p5'])/5) print(accuracy_score(y_test, df['final_p'])) print(f1_score(y_test, 
df['final_p'], average='macro')) print(precision_score(y_test, df['final_p'], average='macro')) print(recall_score(y_test, df['final_p'], average='macro')) # - df = pd.DataFrame({"p1": predictions1, 'p2':predictions2, 'p3':predictions3, 'p4':predictions4, 'p5':predictions5}) # df = pd.DataFrame({'p2':predictions2, 'p3':predictions3, 'p4':predictions4}) df['final_p'] = round((df['p1'] + df['p2'] + df['p3'] + df['p4'] + df['p5'])/5) df.head() print(f1_score(y_test, df['final_p'], average='macro')) # + df = pd.read_csv(f'/Users/chenjianyu/Desktop/Y2S2/SMT203 Computational Social Sci/Covid-19-Singapore-Analysis/Data/Thoughtful Comments/thoughtful_comments_final(1).csv') best_model = { # 'Unthoughtful sample size': 0, 'Best Model': None, 'Accuracy': 0, 'Precision': 0, 'Recall': 0, 'F1': 0 } # for n in range(205, 1756, 25): df_sample_0 = df[df['Thoughtful?'] == 0].sample(205, random_state=999) df_sample_1 = df[df['Thoughtful?'] == 1].sample(205, random_state=999) df_sample = pd.concat([df_sample_0, df_sample_1]) sentences = df_sample['Comment'].values y = df_sample['Thoughtful?'].values for i, text in enumerate(sentences): text = remove_hashtag_mentions_urls(text) text_tokenize = TweetTokenizer().tokenize(text) text_lower = [w.lower() for w in text_tokenize] text_words_only = [w for w in text_lower if re.search('^[a-z]+$',w)] text_joined = ' '.join(text_words_only) sentences[i] = text_joined sentences_train, sentences_test, y_train, y_test = train_test_split(sentences, y, test_size=0.20, random_state=999) vectorizer = CountVectorizer(max_features=1650, min_df=0.15) # max_features=500, min_df=0.01 vectorizer.fit(sentences_train) X_train = vectorizer.transform(sentences_train) for model_name in models_dict: model = models_dict[model_name] # print(f'{model_name}') # print('-'*50) cross_val_scores = cross_validate(estimator = model, X = X_train, y = y_train, scoring=scoring, cv = 5, n_jobs = -1) # print_cross_validation_result(cross_val_scores) # print() if cross_val_scores['test_f1_macro'].mean() > best_model['F1']: # best_model['Unthoughtful sample size'] = n best_model['Best Model'] = model_name best_model['Accuracy'] = round(cross_val_scores['test_accuracy'].mean(), 3) best_model['Precision'] = round(cross_val_scores['test_precision_macro'].mean(), 3) best_model['Recall'] = round(cross_val_scores['test_recall_macro'].mean(), 3) best_model['F1'] = round(cross_val_scores['test_f1_macro'].mean(), 3) for k, v in best_model.items(): print(f'{k}: {v}') # + df = pd.read_csv(f'/Users/chenjianyu/Desktop/Y2S2/SMT203 Computational Social Sci/Covid-19-Singapore-Analysis/Data/Thoughtful Comments/thoughtful_comments_final.csv') df_sample_0 = df[df['Thoughtful?'] == 0].sample(205, random_state=999) df_sample_1 = df[df['Thoughtful?'] == 1].sample(205, random_state=999) df_sample = pd.concat([df_sample_0, df_sample_1]) sentences = df_sample['Comment'].values y = df_sample['Thoughtful?'].values validation_df = pd.read_csv(f'/Users/chenjianyu/Desktop/Y2S2/SMT203 Computational Social Sci/Covid-19-Singapore-Analysis/Data/Thoughtful Comments/validation_comments_final.csv') validation_sentences = validation_df['Comment'].values validation_y = validation_df['Thoughtful?'].values for i, text in enumerate(sentences): text = remove_hashtag_mentions_urls(text) text_tokenize = TweetTokenizer().tokenize(text) text_lower = [w.lower() for w in text_tokenize] text_words_only = [w for w in text_lower if re.search('^[a-z]+$',w)] text_joined = ' '.join(text_words_only) sentences[i] = text_joined for i, text in 
enumerate(sentences): text = remove_hashtag_mentions_urls(text) text_tokenize = TweetTokenizer().tokenize(text) text_lower = [w.lower() for w in text_tokenize] text_words_only = [w for w in text_lower if re.search('^[a-z]+$',w)] text_joined = ' '.join(text_words_only) sentences[i] = text_joined best_config = { # 'Max features': 0, 'Best model': None, 'Accuracy': 0, 'Precision': 0, 'Recall': 0, 'F1': 0 } # for n in range(500, 7000, 50): vectorizer = CountVectorizer(max_features=1650, max_df=0.15) # max_features=500, min_df=0.01 vectorizer.fit(sentences) X_train = vectorizer.transform(sentences) validation_X_test = vectorizer.transform(validation_sentences) for model_name in models_dict: model = models_dict[model_name] model.fit(X_train, y) predictions = model.predict(validation_X_test) acc = accuracy_score(validation_y,predictions) # always true label first, then your predicted labels! precision = precision_score(validation_y,predictions) recall = recall_score(validation_y,predictions) f1 = f1_score(validation_y,predictions) if f1 > best_config['F1']: best_config['Best model'] = model_name best_config['Accuracy'] = round(acc, 3) best_config['Precision'] = round(precision, 3) best_config['Recall'] = round(recall, 3) best_config['F1'] = round(f1, 3) for k, v in best_config.items(): print(f'{k}: {v}') # + df = pd.read_csv(f'/Users/chenjianyu/Desktop/Y2S2/SMT203 Computational Social Sci/Covid-19-Singapore-Analysis/Data/Thoughtful Comments/thoughtful_comments_final.csv') df_sample_0 = df[df['Thoughtful?'] == 0].sample(205, random_state=999) df_sample_1 = df[df['Thoughtful?'] == 1].sample(205, random_state=999) df_sample = pd.concat([df_sample_0, df_sample_1]) sentences = df_sample['Comment'].values y = df_sample['Thoughtful?'].values validation_df = pd.read_csv(f'/Users/chenjianyu/Desktop/Y2S2/SMT203 Computational Social Sci/Covid-19-Singapore-Analysis/Data/Thoughtful Comments/validation_comments_final.csv') validation_sentences = validation_df['Comment'].values validation_y = validation_df['Thoughtful?'].values for i, text in enumerate(sentences): text = remove_hashtag_mentions_urls(text) text_tokenize = TweetTokenizer().tokenize(text) text_lower = [w.lower() for w in text_tokenize] text_words_only = [w for w in text_lower if re.search('^[a-z]+$',w)] text_joined = ' '.join(text_words_only) sentences[i] = text_joined for i, text in enumerate(sentences): text = remove_hashtag_mentions_urls(text) text_tokenize = TweetTokenizer().tokenize(text) text_lower = [w.lower() for w in text_tokenize] text_words_only = [w for w in text_lower if re.search('^[a-z]+$',w)] text_joined = ' '.join(text_words_only) sentences[i] = text_joined best_config_min_df = { 'min_df': 0, 'Accuracy': 0, 'Precision': 0, 'Recall': 0, 'F1': 0 } # for n in range(20, 101): # min_df = n/100 # print(n) vectorizer = CountVectorizer(max_features=1650, max_df=0.15) # max_features=500, min_df=0.01 vectorizer.fit(sentences) X_train = vectorizer.transform(sentences) validation_X_test = vectorizer.transform(validation_sentences) model = svm.SVC(kernel='rbf') model.fit(X_train, y) predictions = model.predict(validation_X_test) acc = accuracy_score(validation_y,predictions) # always true label first, then your predicted labels! 
precision = precision_score(validation_y,predictions) recall = recall_score(validation_y,predictions) f1 = f1_score(validation_y,predictions) if f1 > best_config_min_df['F1']: # best_config_min_df['min_df'] = min_df best_config_min_df['Accuracy'] = round(acc, 3) best_config_min_df['Precision'] = round(precision, 3) best_config_min_df['Recall'] = round(recall, 3) best_config_min_df['F1'] = round(f1, 3) for k, v in best_config_min_df.items(): print(f'{k}: {v}') # - # # Modelling using TF-IDF (TfidfVectorizer) with 5 features df = pd.read_csv(f'/Users/chenjianyu/Desktop/Y2S2/SMT203 Computational Social Sci/Covid-19-Singapore-Analysis/Data/Thoughtful Comments/thoughtful_comments_final.csv') df_sample_0 = df[df['Thoughtful?'] == 0].sample(205, random_state=999) df_sample_1 = df[df['Thoughtful?'] == 1].sample(205, random_state=999) df_sample = pd.concat([df_sample_0, df_sample_1]) sentences = df_sample['Comment'].values y = df_sample['Thoughtful?'].values count_vectorizer = TfidfVectorizer(analyzer="word", preprocessor=None, max_features=None) # + for i, text in enumerate(sentences): text = remove_hashtag_mentions_urls(text) text_tokenize = TweetTokenizer().tokenize(text) text_lower = [w.lower() for w in text_tokenize] text_words_only = [w for w in text_lower if re.search('^[a-z]+$',w)] text_joined = ' '.join(text_words_only) sentences[i] = text_joined sentences_train, sentences_test, y_train, y_test = train_test_split(sentences, y, test_size=0.20, random_state=999) # + tfidf = count_vectorizer.fit_transform(sentences_train) len(count_vectorizer.get_feature_names()) # - svd = TruncatedSVD(n_components=100, n_iter=25, random_state=12) truncated_tfidf = svd.fit_transform(tfidf) len(truncated_tfidf) for model_name in models_dict: model = models_dict[model_name] probas = cross_val_predict(model, truncated_tfidf, y_train, cv=StratifiedKFold(), n_jobs=-1, method='predict_proba', verbose=2) pred_indices = np.argmax(probas, axis=1) classes = np.unique(y_train) preds = classes[pred_indices] print(model_name) print('Log loss: {}'.format(log_loss(y_train, probas))) print('Accuracy: {}'.format(accuracy_score(y_train, preds))) print('Precision: {}'.format(precision_score(y_train, preds))) print('Recall: {}'.format(recall_score(y_train, preds))) print('F1: {}'.format(f1_score(y_train, preds))) skplt.plot_confusion_matrix(y_train, preds) print() print() df_sample_0 = df[df['Thoughtful?'] == 0].sample(205, random_state=999) df_sample_1 = df[df['Thoughtful?'] == 1].sample(205, random_state=999) df_sample = pd.concat([df_sample_0, df_sample_1]) sentences = df_sample['Comment'].values y = df_sample['Thoughtful?'].values features = df_sample[['Length', 'Average Loglikelihood', 'Num Verbs', 'Num Discourse Relations', 'Relevance score']] # + for i, text in enumerate(sentences): text = remove_hashtag_mentions_urls(text) text_tokenize = TweetTokenizer().tokenize(text) text_lower = [w.lower() for w in text_tokenize] text_words_only = [w for w in text_lower if re.search('^[a-z]+$',w)] text_joined = ' '.join(text_words_only) sentences[i] = text_joined features['sentences'] = sentences # features.reset_index(inplace=True) features_train, features_test, y_train, y_test = train_test_split(features, y, test_size=0.20, random_state=999) sentences_train = features_train['sentences'].values sentences_test = features_test['sentences'].values # + v_tfidf = count_vectorizer.fit_transform(sentences_train) len(count_vectorizer.get_feature_names()) # - svd = TruncatedSVD(n_components=100, n_iter=25, random_state=12) truncated_tfidf = 
svd.fit_transform(tfidf) i = 0 truncated_tfidf_features = [] for row in features_train.iterrows(): temp = list(truncated_tfidf[i]) temp.append(row[1]['Length']) temp.append(row[1]['Average Loglikelihood']) temp.append(row[1]['Num Verbs']) temp.append(row[1]['Num Discourse Relations']) temp.append(row[1]['Relevance score']) truncated_tfidf_features.append(temp) truncated_tfidf = np.array(truncated_tfidf_features) for model_name in models_dict: model = models_dict[model_name] probas = cross_val_predict(model, truncated_tfidf, y_train, cv=StratifiedKFold(), n_jobs=-1, method='predict_proba', verbose=2) pred_indices = np.argmax(probas, axis=1) classes = np.unique(y_train) preds = classes[pred_indices] print(model_name) print('Log loss: {}'.format(log_loss(y_train, probas))) print('Accuracy: {}'.format(accuracy_score(y_train, preds))) print('Precision: {}'.format(precision_score(y_train, preds))) print('Recall: {}'.format(recall_score(y_train, preds))) print('F1: {}'.format(f1_score(y_train, preds))) skplt.plot_confusion_matrix(y_train, preds) print() print() # + df = pd.read_csv(f'/Users/chenjianyu/Desktop/Y2S2/SMT203 Computational Social Sci/Covid-19-Singapore-Analysis/Data/Thoughtful Comments/thoughtful_comments_final.csv') df_sample_0 = df[df['Thoughtful?'] == 0].sample(205, random_state=999) df_sample_1 = df[df['Thoughtful?'] == 1].sample(205, random_state=999) df_sample = pd.concat([df_sample_0, df_sample_1]) sentences = df_sample['Comment'].values y = df_sample['Thoughtful?'].values count_vectorizer = TfidfVectorizer(analyzer="word", preprocessor=None, max_features=None) for i, text in enumerate(sentences): text = remove_hashtag_mentions_urls(text) text_tokenize = TweetTokenizer().tokenize(text) text_lower = [w.lower() for w in text_tokenize] text_words_only = [w for w in text_lower if re.search('^[a-z]+$',w)] text_joined = ' '.join(text_words_only) sentences[i] = text_joined sentences_train, sentences_test, y_train, y_test = train_test_split(sentences, y, test_size=0.20, random_state=999) tfidf = count_vectorizer.fit_transform(sentences_train) svd = TruncatedSVD(n_components=200, n_iter=25, random_state=12) truncated_tfidf = svd.fit_transform(tfidf) i = 0 truncated_tfidf_features = [] for row in features_train.iterrows(): temp = list(truncated_tfidf[i]) temp.append(row[1]['Length']) temp.append(row[1]['Average Loglikelihood']) temp.append(row[1]['Num Verbs']) temp.append(row[1]['Num Discourse Relations']) temp.append(row[1]['Relevance score']) truncated_tfidf_features.append(temp) truncated_tfidf = np.array(truncated_tfidf_features) validation_df = pd.read_csv(f'/Users/chenjianyu/Desktop/Y2S2/SMT203 Computational Social Sci/Covid-19-Singapore-Analysis/Data/Thoughtful Comments/validation_comments_final.csv') validation_sentences = validation_df['Comment'].values validation_y = validation_df['Thoughtful?'].values validation_features = validation_df[['Length', 'Average Loglikelihood', 'Num Verbs', 'Num Discourse Relations', 'Relevance score']] for i, text in enumerate(validation_sentences): text = remove_hashtag_mentions_urls(text) text_tokenize = TweetTokenizer().tokenize(text) text_lower = [w.lower() for w in text_tokenize] text_words_only = [w for w in text_lower if re.search('^[a-z]+$',w)] text_joined = ' '.join(text_words_only) validation_sentences[i] = text_joined validation_features['sentences'] = validation_sentences validation_features.reset_index(inplace=True) validation_sentences = validation_features['sentences'].values v_tfidf = 
count_vectorizer.fit_transform(validation_sentences) svd = TruncatedSVD(n_components=200, n_iter=25, random_state=12) truncated_v_tfidf = svd.fit_transform(v_tfidf) i = 0 truncated_v_tfidf_features = [] for row in validation_features.iterrows(): temp = list(truncated_v_tfidf[i]) temp.append(row[1]['Length']) temp.append(row[1]['Average Loglikelihood']) temp.append(row[1]['Num Verbs']) temp.append(row[1]['Num Discourse Relations']) temp.append(row[1]['Relevance score']) truncated_v_tfidf_features.append(temp) truncated_v_tfidf = np.array(truncated_v_tfidf_features) for model_name in models_dict: model = models_dict[model_name] model.fit(truncated_tfidf, y_train) predictions = model.predict(truncated_v_tfidf) acc = accuracy_score(validation_y,predictions) # always true label first, then your predicted labels! precision = precision_score(validation_y,predictions) recall = recall_score(validation_y,predictions) f1 = f1_score(validation_y,predictions) print(model_name) print('-'*50) print('Accuracy Score for {} is {:.5f}'.format(model_name,acc)) print('Precision Score for {} is {:.5f}'.format(model_name,precision)) print('Recall Score for {} is {:.5f}'.format(model_name,recall)) print('F1 Score for {} is {:.5f}'.format(model_name,f1)) print() # -
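# Note on the cells above: the TfidfVectorizer and a fresh TruncatedSVD appear to be refit on the validation sentences (`fit_transform` is called again), which puts the training and validation features in different spaces. A sketch of an alternative (my suggestion, not part of the original notebook) that fits the text transforms once on the training sentences and only applies them to the validation set, using an sklearn Pipeline:

# +
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.svm import SVC
from sklearn.metrics import f1_score

# Fit vectorizer, SVD, and classifier together on training text only
text_clf = Pipeline([
    ('tfidf', TfidfVectorizer(analyzer='word')),
    ('svd', TruncatedSVD(n_components=200, n_iter=25, random_state=12)),
    ('clf', SVC(kernel='rbf')),
])
text_clf.fit(sentences_train, y_train)

# The validation sentences are only transformed, reusing the fitted vocabulary and components
val_predictions = text_clf.predict(validation_sentences)
print(f1_score(validation_y, val_predictions))
# -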
30,103
/.ipynb_checkpoints/31-classification-algorithms-checkpoint.ipynb
14cb4db91d0294a5b168c65c6727879cd2c8a498
[]
no_license
peterpuwang/sklearn-multiple-classification-algorithm
https://github.com/peterpuwang/sklearn-multiple-classification-algorithm
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
15,925
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Python Assignment 3 # ### **1.1 Write a Python Program to implement your own myreduce() function which works exactly like Python's built-in function reduce()** # + import numpy as np from abc import ABCMeta, abstractmethod class AbstactMyReduceBase(metaclass=ABCMeta): def __init__(self): pass @abstractmethod def my_sum(self): pass @abstractmethod def my_multiply(self): pass @abstractmethod def my_mean(self): pass @abstractmethod def my_reduce(self): pass class MyReduce(AbstactMyReduceBase): def __init__(self): self._func = "" self._iter = "" def my_sum(self): print("Function my_reduce() parameters: my_sum() and {}".format(self._iter)) return np.sum(self._iter) def my_multiply(self): print("Function my_reduce() parameters: my_multiply() and {}".format(self._iter)) prod = 1 for num in self._iter: prod *= num return (prod) def my_mean(self): print("Function my_reduce() parameters: my_mean() and {}".format(self._iter)) return np.mean(self._iter) def my_reduce(self, myfunc, myiter): # Assign the class variables self._func = myfunc self._iter = myiter # Check if the params are function and iterable else raise exception. # Check if the 1st parameter is a function or not. if hasattr(self._func,'__call__'): pass else: raise Exception("'{}' is not a callable function.".format(self._func)) # Check 2nd parameter is iterable or not if hasattr(self._iter,'__iter__'): pass else: raise Exception("'{}' is not an iterable object.".format(self._iter)) # Call the function with argument print("the value of my_reduce function is {}".format(myfunc())) try: MyReduceObj = MyReduce() except Exception as e: print("Exception: Object initialisation error",e) else: print("Object initialised successfully...\n") try: MyReduceObj.my_reduce(MyReduceObj.my_sum,[1,2,3,4,5]) MyReduceObj.my_reduce(MyReduceObj.my_multiply,[1,2,3,4,1,2,3,4]) MyReduceObj.my_reduce(MyReduceObj.my_mean,[1,2,3,4,5]) # Check raising exceptions (UNCOMMENT only one of the below line) #MyReduceObj.my_reduce("my_sum",[1,2,3,4,5]) #MyReduceObj.my_reduce(MyReduceObj.my_sum,12345) except Exception as e: print("{0}:Exception: my_reduce() execution error: {1}".format(type(e),e)) finally: del MyReduceObj print("\nObject deleted successfully...") # - # ### 1.2 Write a Python program to implement your own myfilter() function which works exactly like Python's built-in function filter() # + import numpy as np from abc import ABCMeta, abstractmethod class AbstactMyFilterBase(metaclass=ABCMeta): def __init__(self): pass @abstractmethod def my_filter(self): pass @abstractmethod def is_sr_citizen(self): pass @abstractmethod def is_child(self): pass class MyFilter(AbstactMyFilterBase): def __init__(self): self._queue = [] self._reduced_queue = [] def is_sr_citizen(self): for i in self._queue: if i >= 60: self._reduced_queue.append(i) #print(self._reduced_queue) def is_child(self): for i in self._queue: if i < 18: self._reduced_queue.append(i) #print(self._reduced_queue) def my_filter(self,called_func,passed_queue): self._queue = passed_queue self._reduced_queue = [] called_func() return self._reduced_queue try: sr_citizen = [] children = [] ticket_queue = np.arange(10,80,3) MyFilterObj = MyFilter() except Exception as e: print("{0}:Exception: Object initialisation error: {1}".format(type(e),e)) else: print("Object initialised successfully...\n") 
try: print("Age of people in ticket queue:{0}".format(ticket_queue)) sr_citizen = MyFilterObj.my_filter(MyFilterObj.is_sr_citizen,ticket_queue) print("The age of senior citizen's (>=60) in ticket queue:{0}".format(sr_citizen)) children = MyFilterObj.my_filter(MyFilterObj.is_child,ticket_queue) print("The age of children(<18) in ticket queue:{0}".format(children)) except Exception as e: print("{0}:Exception: my_filter() execution error: {1}".format(type(e),e)) finally: del MyFilterObj print("\nObject deleted successfully...") # - # ### 2. Implement List comprehensions to produce the following lists. Write List comprehensions to produce the following Lists # + active="" # 2. Implement List comprehensions to produce the following lists. # Write List comprehensions to produce the following Lists # 2a. ['A', 'C', 'A', 'D', 'G', 'I', ’L’, ‘ D’] # 2b. ['x', 'xx', 'xxx', 'xxxx', 'y', 'yy', 'yyy', 'yyyy', 'z', 'zz', 'zzz', 'zzzz'] # 2c. ['x', 'y', 'z', 'xx', 'yy', 'zz', 'xxx', 'yyy', 'zzz', 'xxxx', 'yyyy', 'zzzz'] # 2d. [[2], [3], [4], [3], [4], [5], [4], [5], [6]] # 2e. [[2, 3, 4, 5], [3, 4, 5, 6], [4, 5, 6, 7], [5, 6, 7, 8]] # 2f. [(1, 1), (2, 1), (3, 1), (1, 2), (2, 2), (3, 2), (1, 3), (2, 3), (3, 3)] # + from abc import ABCMeta, abstractmethod class AbstactMyListCompBase(metaclass=ABCMeta): def __init__(self): pass @abstractmethod def my_list_comp_disp(self): pass class MyListComp(AbstactMyListCompBase): def __init__(self,str_a,str_b,list_d,list_e,list_f): self._str_a = str_a self._str_b = str_b self._list_d = list_d self._list_e = list_e self._list_f = list_f def my_list_comp_disp(self): #2a. ['A', 'C', 'A', 'D', 'G', 'I', ’L’, ‘ D’] mylist_a = [] mylist_a = [i for i in self._str_a] print(mylist_a) # 2b. ['x', 'xx', 'xxx', 'xxxx', 'y', 'yy', 'yyy', 'yyyy', 'z', 'zz', 'zzz', 'zzzz'] mylist_b = [] mylist_b = [i*j for i in self._str_b for j in range(1,5)] print(mylist_b) # 2c. ['x', 'y', 'z', 'xx', 'yy', 'zz', 'xxx', 'yyy', 'zzz', 'xxxx', 'yyyy', 'zzzz'] mylist_c = [] mylist_c = [i*j for i in range(1,5) for j in self._str_b] print(mylist_c) # 2d. [[2], [3], [4], [3], [4], [5], [4], [5], [6]] mylist_d = [] mylist_d = [[i+j] for i in self._list_d for j in range(0,3)] print(mylist_d) # 2e. [[2, 3, 4, 5], [3, 4, 5, 6], [4, 5, 6, 7], [5, 6, 7, 8]] mylist_e = [] mylist_e = [[i+j for i in self._list_e] for j in range(0,4)] print(mylist_e) # 2f. [(1, 1), (2, 1), (3, 1), (1, 2), (2, 2), (3, 2), (1, 3), (2, 3), (3, 3)] mylist_f = [] mylist_f = [(j,i) for i in self._list_f for j in self._list_f] print(mylist_f) # End of function "my_list_comp_disp()" try: mystr_a = "ACADGILD" mystr_b = "xyz" mylist_d = [2,3,4] mylist_e = [2,3,4,5] mylist_f = [1,2,3] ListCompObj = MyListComp(mystr_a, mystr_b, mylist_d, mylist_e, mylist_f) except Exception as e: print("{0}:Exception: Object initialisation error: {1}".format(type(e),e)) else: print("Object initialised successfully...\n") try: print("Printing list comprehensions...\n") ListCompObj.my_list_comp_disp() except Exception as e: print("{0}:Exception: Error in function my_list_comp_disp(): {1}".format(type(e),e)) finally: del ListCompObj print("\nObject deleted successfully...") # -
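# For reference, the MyReduce class above dispatches to specific aggregation methods rather than folding an arbitrary two-argument function the way Python's built-in reduce() does. A minimal sketch of a reduce and a filter that follow the built-in semantics (my own illustration, not part of the assignment solution above):

# +
def myreduce(func, iterable, initializer=None):
    """Fold a two-argument function over an iterable, like functools.reduce."""
    it = iter(iterable)
    if initializer is None:
        try:
            acc = next(it)
        except StopIteration:
            raise TypeError("myreduce() of empty sequence with no initial value")
    else:
        acc = initializer
    for item in it:
        acc = func(acc, item)
    return acc

def myfilter(func, iterable):
    """Return the items for which func is truthy, like filter() (but eager)."""
    if func is None:
        return [item for item in iterable if item]
    return [item for item in iterable if func(item)]

print(myreduce(lambda a, b: a + b, [1, 2, 3, 4, 5]))       # 15
print(myfilter(lambda age: age >= 60, [10, 45, 61, 73]))   # [61, 73]
# -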
8,038
/notebooks/.ipynb_checkpoints/Ripser Knowl Graph-checkpoint.ipynb
0f04962aef50849595a09192bc4f514096b861e7
[]
no_license
rfunklab/network_structure
https://github.com/rfunklab/network_structure
0
0
null
2022-12-08T06:13:37
2019-10-07T22:40:00
Jupyter Notebook
Jupyter Notebook
false
false
.py
35,933
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import os import numpy as np import networkx as nx import pandas as pd from ripser import ripser import matplotlib.pyplot as plt upto = 10000 data_loc = '/home/gebhart/projects/rfunklab/data/patents_20190722' collab_graph = nx.Graph() knowl_graph = nx.Graph() collab_df = pd.read_csv(os.path.join(data_loc, 'collaboration_edges.csv'), header=0) knowl_df = pd.read_csv(os.path.join(data_loc, 'knowledge_edges.csv'), header=0) for idx, row in collab_df.iterrows(): if idx >= upto: break collab_graph.add_edge(row['inventor_id_a'], row['inventor_id_b'], weight=row['patents']) # + # for idx, row in knowl_df.iterrows(): # knowl_graph.add_edge(row['subgroup_id_a'], row['subgroup_id_b']) # - len(collab_graph.edges()) # + # len(knowl_graph.edges()) # - len(collab_graph.nodes()) cgm = nx.to_scipy_sparse_matrix(collab_graph) rips = ripser(cgm, distance_matrix=True) rips def plot_diagram(diagram, title=''): fig, ax = plt.subplots() if diagram.size > 0: d = diagram[~np.isinf(diagram[:,1])] if d.shape[0] > 0: ax.scatter(d[:,0], d[:,1], s=25, c=d[:,0]**2 - d[:,1], cmap=plt.cm.coolwarm, zorder=10) lims = [ np.min(d[:,0]-1), # min of both axes np.max(d[:,1]+1), # max of both axes ] # now plot both limits against eachother ax.plot(lims, lims, 'k-', alpha=0.75, zorder=0) ax.set_xlim(lims) ax.set_ylim(lims) ax.set_aspect('equal') plt.xlabel('Birth') plt.ylabel('Death') plt.title(title) for i in range(len(rips['dgms'])): plot_diagram(rips['dgms'][i], title='Persistence Diagram Dimension {}'.format(i))
1,967
/01 Apr Capstone (1).ipynb
0c3a71b8b90c81e9e106f5fc5d7dbe4a69c552bd
[]
no_license
gurmanjit/https-github.com-gurmanjit-WildFires_St.Clair
https://github.com/gurmanjit/https-github.com-gurmanjit-WildFires_St.Clair
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
218,816
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python [Root] # language: python # name: Python [Root] # --- from __future__ import division import numpy as np import pandas as pd from pandas import Series,DataFrame from math import ceil df=pd.read_csv(r'C:\Users\ranbir\OneDrive - University of Stirling\IceRobotics\pyalgorithms\data_requests\_act_000.csv',header='infer',parse_dates=True,infer_datetime_format=True) df.dtypes df.head() #delete lying bouts- was created while testing a formula in excel del df['lying bouts'] df.head() # <img src="lyingbouts.png" width="400",height="400"> # + #calculate the lying bouts based on 'Stand Start' and number of transitions - 'standing change' def lying_bouts(stdStart,stdChange): #case:lying down for the whole 15:00 mins if stdStart == 0 and stdChange == 0: return 1 #case:standing for the whole 15:00 mins elif stdStart == 1 and stdChange == 0: return 0 #case: even no. of transitions in the 15:00 mins elif stdChange%2==0: #case: intially lying down - add 1 if stdStart == 0: return stdChange/2+1 #case: intially standing - round up lying_bouts else: return ceil(stdChange/2) #case: odd no. of transitions in the 15:00 mins else: return ceil(stdChange/2) # - print "0 initial, 0 transitions:",lying_bouts(0,0),"lying bouts" print "1 initial, 0 transitions:",lying_bouts(1,0),"lying bouts" print "0 initial, even transitions:",lying_bouts(0,4),"lying bouts" print "0 initial, odd transitions:",lying_bouts(0,3),"lying bouts" print "1 initial, even transitions:",lying_bouts(1,4),"lying bouts" print "1 initial, odd transitions:",lying_bouts(1,3),"lying bouts" df['lying bouts']=df.apply(lambda row:lying_bouts(row['Initial State'],row['standing change']),axis=1) df.head() ple --help # As you can see, Datalab allows you to sample a specified count of rows, using various sampling strategies, including random sampling, or hashing. Lets try retrieving a couple samples. # %bigquery sample --table cloud-datalab-samples:httplogs.logs_20140615 --count 5 # %bigquery sample --table cloud-datalab-samples:httplogs.logs_20140615 --count 10 --field timestamp --method hashed # # Querying Data # # Of course, querying BigQuery data using SQL is the mainline scenario. Its always handy to have the BigQuery SQL reference. Datalab makes this easy to access by providing a direct link to the [query reference](https://cloud.google.com/bigquery/query-reference) via the Help button on the top toolbar. # # Lets see an example of this. # + language="sql" # SELECT timestamp, latency # FROM [cloud-datalab-samples:httplogs.logs_20140615] # ORDER BY latency DESC # LIMIT 10 # - # That was a simple query. Now, lets do something a bit more interesting. Lets compute the 50th percentile, 95th percentile and 99th percentile latencies for request processing. # # BigQuery makes this effortless with all of its built-in statistical and analytics functions such as `QUANTILES` and `NTH`. # + language="sql" # SELECT # NTH(50, quantiles) AS latency_50th_percentile, # NTH(95, quantiles) AS latency_95th_percentile, # NTH(99, quantiles) AS latency_99th_percentile # FROM ( # SELECT QUANTILES(latency, 100) AS quantiles # FROM [cloud-datalab-samples:httplogs.logs_20140615] # ) # - # # Visualizing Data # # Beyond tables, it almost always interesting to be able to visualize the data to get a more meaningful view of aggregates, trends and patterns. # # Lets write another query, also using `QUANTILES`. 
This time the `ROW_NUMBER` function will be used to also include a row number in the output data representing the percentile, and use that as the x-axis value in a chart. In order to reference this SQL query in the chart, it is converted into a SQL module via the `--module` argument. This module can then be that can be passed on to the chart referencing it by name. The chart will execute the contained query. # + magic_args="--module data" language="sql" # SELECT ROW_NUMBER() OVER (ORDER BY time) AS percentile, time # FROM ( # SELECT QUANTILES(LOG10(latency), 50) AS time # FROM [cloud-datalab-samples:httplogs.logs_20140615] # ) # - # %chart columns --data data --fields percentile,time # # Looking Ahead # # There are various other commands such as the ones to import (or load) and export (or extract) data, or to create pipelines (or transform) data to build **ETL pipelines**. # # Datalab allows creating **parameterized queries**, so queries can be modified using variables defined in the notebook during interactive work, or through pipeline commands when these are deployed. Queries can be declared and validated one step-at-a-time to create **composite SQL queries** that harness the full power of BigQuery SQL, while managing authoring complexity. # # All of these BigQuery commands are implemented on top of **Python BigQuery APIs** (in the `gcp.bigquery` Python module) which not only allow you to write arbitrary code and logic while working with BigQuery data, but also integrate SQL and Python, and the Python data analysis libraries such as pandas and matplotlib to perform sophisticated and custom data analysis and visualization tasks. # # These topics will be covered in additional notebooks within the BigQuery tutorials included along with Datalab. data['STATE'].value_counts().head(n=10).plot(kind='barh',color='red') plt.title(' Amount of Fire caused in different states of USA') plt.xlabel('count of fire') plt.ylabel('States of USA') plt.show() plt.show() # + [markdown] _cell_guid="e86edfe7-3ada-4a7a-883a-b9e0b5e1651b" _uuid="67c9f364f21bf6f7ec8d09a99f1ad17367d3f440" # Let us norrow down to top 3 # + _cell_guid="eec30520-755b-4d1b-a77d-149becf0bae0" _uuid="05d45f8001b67691c96966fe4b9b425c7bf7ab98" CA = data[data['STATE']=='CA'] GA = data[data['STATE']=='GA'] TX = data[data['STATE']=='TX'] # + _cell_guid="c31f2045-964a-41b2-8e11-541a4afa2024" _uuid="f7bd095ffb9eef7c9be3d179654cb3f9c527d5a0" CA['STAT_CAUSE_DESCR'].value_counts().plot(kind='barh',color='red',title='causes of fires for CA') plt.xlabel('count of fire') plt.ylabel('Causes of fire in CA') plt.show() plt.show() # + _cell_guid="a771520b-a49a-4e1a-9e2d-a19b32dfa25f" _uuid="96ba5f8d5af37ab9021bac8b8b383d2c007350df" GA['STAT_CAUSE_DESCR'].value_counts().plot(kind='barh',color='red',title='causes of fires for GA') plt.xlabel('count of fire') plt.ylabel('Causes of fire in GA') plt.show() plt.show() # + _cell_guid="476673cb-c8ee-4ced-9478-4c480356dd86" _uuid="851ec8eec7c4b182390aa7dbf1be6f1c2bb3eafb" TX['STAT_CAUSE_DESCR'].value_counts().plot(kind='barh',color='red',title='causes of fires for TX') plt.xlabel('count of fire') plt.ylabel('Cause of fire in TX') plt.show() plt.show() # + [markdown] _cell_guid="5ce059e5-1959-436f-b2d2-67a9a86e03be" _uuid="d7656ff9f18153823ed56632d3c9d14f6e5e0e1c" # Let us create a rough map using scatter plot as we have the Latitude and Longitude # + _cell_guid="f3104147-5f1d-4956-949d-1c94fa8ebf3f" _uuid="23dbac00c3a0a0eec99ac9764e2f1fd912014713" 
data.plot(kind='scatter',x='LONGITUDE',y='LATITUDE',color='red',alpha=0.3) plt.show() # + [markdown] _cell_guid="b09a0137-505c-46fe-8c2e-574755d6010a" _uuid="a213cf1f9e6fc95d649bab79479c2da3ae9a4dac" # There are lot of categories in this dataset so let us use One Hot Encoding to find the correlation between all these. # + _cell_guid="0f2e332d-cafa-47b5-9560-dcc05d4be256" _uuid="c930f198331177a02e90dd4004f2d2fd65a7e5ac" le = preprocessing.LabelEncoder() data['STAT_CAUSE_DESCR'] = le.fit_transform(data['STAT_CAUSE_DESCR']) data['STATE'] = le.fit_transform(data['STATE']) data['DAY_OF_WEEK'] = le.fit_transform(data['DAY_OF_WEEK']) print(data.head()) # + _cell_guid="a3ed644b-80a3-4bd8-ac6c-cc0772203613" _uuid="d9aff0e4e41263a34f258d8f62160293d7bc6852" def plot_corr(data,size=10): corr = data.corr() #the default method is pearson fig, ax = plt.subplots(figsize=(size, size)) ax.matshow(corr,cmap=plt.cm.Oranges) plt.xticks(range(len(corr.columns)), corr.columns) plt.yticks(range(len(corr.columns)), corr.columns) for tick in ax.get_xticklabels(): tick.set_rotation(45) plt.show() plot_corr(data) # + [markdown] _cell_guid="7f4ee430-3add-4801-933d-41e96e4b2ca3" _uuid="00887bfbb183b46fb896bc8374c6a2a336178f49" # Good correlation between month and latitude, weather and season are related, less correlation between longitude and month # & No Correlation of Target variable with any # - # **Part 3** # + [markdown] _cell_guid="4e04c953-12b4-4318-867d-98e820111068" _uuid="5079f837526a82eb5af2fcc17e9e71556e6139cc" # **Preparing the data for machine learning** # # Dropping the Dates and NA's # + _cell_guid="f7cbef43-7564-4a96-94ab-c790ae43261f" _uuid="5d47f92df7808ab5f65c0e6aeaad63f65cdb9729" data = data.drop('DATE',axis=1) data = data.dropna() # + [markdown] _cell_guid="1e0db330-c479-4f28-af9d-761b356eb728" _uuid="5513c0fd64e98413fd8361ecd687ca465a82f388" # Our Target variable is Cause of Fire(" STAT_CAUSE_DESCR ") # + _cell_guid="e9c3f2b3-a125-4352-add0-95f00d659ad4" _uuid="094d3500463ccf94dc5405c30f695b2b71b1f39f" X = data.drop(['STAT_CAUSE_DESCR'], axis=1).values y = data['STAT_CAUSE_DESCR'].values # - # Logistic Regression # + _cell_guid="e11ac5a4-b955-4942-abe9-d266f6d8ee32" _uuid="e51f77adb3d5e6f307389667b053b207a18c29f0" X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.3, random_state=0) #30% for testing, 70% for training # - regr = linear_model.LinearRegression() # Train the model using the training sets regr.fit(X_train,y_train) # Make predictions using the testing set y_pred = regr.predict(X_test) # The coefficients print('Coefficients: \n', regr.coef_) # The mean squared error print('Mean squared error: %.2f' % mean_squared_error(y_test,y_pred)) # The coefficient of determination: 1 is perfect prediction print('Coefficient of determination: %.2f' % r2_score(y_test,y_pred)) regr.fit(X_train,y_train) regr.score(X_train, y_train) regr.fit(X_test,y_test) regr.score(X_test, y_test) # Decision Tree from sklearn import tree from sklearn.tree import DecisionTreeClassifier tree.plot_tree(clf.fit(X_train,y_train)) clf = tree.DecisionTreeRegressor(max_depth=35) clf.fit(X_train, y_train) clf.score(X_train, y_train) clf.fit(X_test, y_test) clf.score(X_test, y_test) # + [markdown] _cell_guid="702432d9-b078-49a8-8998-333082bfd610" _uuid="36069b3bd03f404156e3373d2b0bef67bf5b9b52" # Random Forest # + _cell_guid="fecfae02-1bc4-4919-9f81-819a084ebf98" _uuid="bc393791b1db7ddefd3169dde1b70976505bf134" clf_rf = ske.RandomForestClassifier(n_estimators=50) clf_rf = clf_rf.fit(X_train, y_train) 
print(clf_rf.score(X_test,y_test)) # - # + [markdown] _cell_guid="ebebc0da-7acf-4422-a6d6-e945550ef182" _uuid="d2fe8d2fbc324b970a4f71b8d3512eea0ca58b47" # Let us narrow down the classes as there are a lot of classes related to the cause of fire and wich could be tideous while predicting the cause of it. # + _cell_guid="f888288f-dfb6-46e8-89e3-0dd422c7e50e" _uuid="1172af6ec3de0ae3c97796860334faeb80672da4" code_folding=[] def set_label(cat): cause = 0 natural = ['Lightning'] accidental = ['Structure','Fireworks','Powerline','Railroad','Smoking','Children','Campfire','Equipment Use','Debris Burning'] malicious = ['Arson'] other = ['Missing/Undefined','Miscellaneous'] if cat in natural: cause = 1 elif cat in accidental: cause = 2 elif cat in malicious: cause = 3 else: cause = 4 return cause data['LABEL'] = data_orig['STAT_CAUSE_DESCR'].apply(lambda x: set_label(x)) # I created a copy of the original data earlier in the kernel data = data.drop('STAT_CAUSE_DESCR',axis=1) print(data.head()) # + [markdown] _cell_guid="eacab390-2e5d-4d80-8572-17e2a1b2428b" _uuid="d059b4e2a30d7520f323286c12788df93eab05f1" # Let us try to predict the LABEL now. # + _cell_guid="8c2a20be-e3ed-4f38-85c2-9f77de6e9d00" _uuid="99fe5637627005e36010fff605e26ff883b738fa" X = data.drop(['LABEL'], axis=1).values y = data['LABEL'].values X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.3, random_state=0) clf_rf = ske.RandomForestClassifier(n_estimators=50) clf_rf = clf_rf.fit(X_train, y_train) print(clf_rf.score(X_test,y_test)) # + [markdown] _cell_guid="4fd3f741-6923-4622-a29e-ed0418c2aec2" _uuid="d65fe3bfd48968103b55ffffe4620a9d23c7914a" # Reducing the Classes did turn out to be a good decision # + _cell_guid="9e6861ec-a5d6-48ca-b911-8c6f92ac8716" _uuid="63d1f0e9a661b72390b0016b330382a617048fe6" from sklearn.metrics import confusion_matrix y_pred = clf_rf.fit(X_train, y_train).predict(X_test) cm = confusion_matrix(y_true=y_test,y_pred=y_pred) print(cm) # + [markdown] _cell_guid="1a72bee7-32e6-4b94-b5b3-8d19c6a2964c" _uuid="9716f941a653ebb1567dcbfb782db41faec49ce6" # Accuracy and Confusion matrix simplified according to Label below # + _cell_guid="1446c6df-3a1e-479c-8451-daa404783135" _uuid="474589469b314e5bf4c77bf2f0d48f20c5e14f6f" cmn = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] fig,ax = plt.subplots(figsize=(10,10)) ax.matshow(cmn,cmap=plt.cm.Oranges,alpha=0.7) for i in range(cmn.shape[0]): for j in range(cmn.shape[1]): ax.text(x=j,y=i,s=cmn[i,j],va='center',ha='center') plt.xlabel('predicted label') plt.ylabel('true label') plt.show() # - # **Part 4** # # Narrowing down to States print(CA.head()) # Create a new field: ARSON # + def set_arson_label(cause): arson = 0 if cause == 'Arson': arson = 1 return arson CA['ARSON'] = CA['STAT_CAUSE_DESCR'].apply(lambda x: set_arson_label(x)) print(CA.head()) # - # We can drop the DATE, STATE, FIRE_SIZE and STAT_CAUSE_DESCR fields and convert the DAY_OF_WEEK to numerical values. 
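# (Added note, not part of the original kernel) Label encoding maps the seven day names to the
# integers 0-6, which imposes an arbitrary ordering on an unordered categorical variable. A hedged
# alternative sketch using one-hot encoding is shown below; the `DOW_` prefix is just an
# illustrative choice, and the notebook itself proceeds with label encoding in the next cell.

# +
# One-hot encode DAY_OF_WEEK into indicator columns (illustrative only; not attached back to CA)
day_dummies = pd.get_dummies(CA['DAY_OF_WEEK'], prefix='DOW')
print(day_dummies.head())
# -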
# +
le = preprocessing.LabelEncoder()
CA['DAY_OF_WEEK'] = le.fit_transform(CA['DAY_OF_WEEK'])
print(CA.head())
# -

# # Repeating the analysis with a reduced feature set

data1 = pd.read_sql_query("SELECT STAT_CAUSE_DESCR,LATITUDE,LONGITUDE,STATE,FIRE_SIZE FROM 'Fires'", conn)
print(data1.head())

data1_orig = data1.copy()

# +
def set_label(cat):
    cause = 0
    natural = ['Lightning']
    accidental = ['Structure','Fireworks','Powerline','Railroad','Smoking','Children','Campfire','Equipment Use','Debris Burning']
    malicious = ['Arson']
    other = ['Missing/Undefined','Miscellaneous']
    if cat in natural:
        cause = 1
    elif cat in accidental:
        cause = 2
    elif cat in malicious:
        cause = 3
    else:
        cause = 4
    return cause

data1['LABEL'] = data1_orig['STAT_CAUSE_DESCR'].apply(lambda x: set_label(x))  # I created a copy of the original data earlier in the kernel
data1 = data1.drop('STAT_CAUSE_DESCR',axis=1)
print(data1.head())

# +
le = preprocessing.LabelEncoder()
data1['STATE'] = le.fit_transform(data1['STATE'])
print(data1.head())

# +
X = data1.drop(['LABEL'], axis=1).values
y = data1['LABEL'].values

Xx_train, Xx_test, yy_train, yy_test = train_test_split(X,y,test_size=0.3, random_state=0)
# -

from sklearn.svm import SVC
svc = SVC(kernel='rbf', C=10, gamma=0.1)
svc.fit(Xx_train, yy_train)

# +
from sklearn import svm
# Create an SVM classifier
clf = svm.SVC(kernel='linear')  # Linear kernel
# -

# Train the model using the training split created above
clf.fit(Xx_train, yy_train)

# +
# Predict the response for the test dataset
y_pred = clf.predict(Xx_test)
# -

from sklearn import metrics
print("Accuracy:", metrics.accuracy_score(yy_test, y_pred))
# LABEL is a multi-class target, so precision and recall need an explicit averaging scheme
print("Precision:", metrics.precision_score(yy_test, y_pred, average='weighted'))
print("Recall:", metrics.recall_score(yy_test, y_pred, average='weighted'))

# + [markdown] _cell_guid="81be7d8b-b23d-4594-8e89-2a66baea73b0" _uuid="0188853e4f4ec629f5507b14c9e9cf6d91dceb4e"
# We can now test the model:
# -
X = CA.drop(['ARSON'], axis=1).values
y = CA['ARSON'].values
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.3, random_state=0)  # 30% for testing, 70% for training
clf_rf = ske.RandomForestClassifier(n_estimators=200)
clf_rf = clf_rf.fit(X_train, y_train)
print(clf_rf.score(X_test,y_test))

# Summary:
# Given some basic data, the kind of data available when a fire is first discovered, it is possible to predict with some accuracy whether the fire was the result of arson.
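# (Added follow-up sketch, not part of the original kernel) The single train/test split above gives
# only one accuracy estimate, and the ARSON label is likely imbalanced, so plain accuracy can be
# optimistic. A quick stability check is stratified cross-validation on the same X and y arrays
# built from CA above.

# +
from sklearn.model_selection import cross_val_score

cv_scores = cross_val_score(ske.RandomForestClassifier(n_estimators=200), X, y, cv=5)
print("Cross-validated accuracy: %.3f +/- %.3f" % (cv_scores.mean(), cv_scores.std()))
# -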
16,420
/1. Evaluating The Performance of Deep Learning Models/Pina_indians_diabetes_automatic_validation_split.ipynb
285e52c47be7970a640a9d97690e9e5e437406fc
[]
no_license
mahfuz87/Learning-Deep-learning
https://github.com/mahfuz87/Learning-Deep-learning
1
0
null
null
null
null
Jupyter Notebook
false
false
.py
28,861
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from keras.models import Sequential from keras.layers import Dense import numpy seed = 5 numpy.random.seed(seed) dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",") X=dataset[:,0:8] Y=dataset[:,8] model = Sequential() model.add(Dense(12, input_dim=8, init='uniform', activation='relu')) model.add(Dense(8, init='uniform', activation='relu')) model.add(Dense(1, init='uniform', activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) model.fit(X,Y, validation_split=.33, nb_epoch=150, batch_size=10) io airplay. The Billboard Hot 100 is still the standard by which a song's popularity is measured today. The Hot 100 is ranked by radio airplay audience impressions as measured by Nielsen BDS, sales data compiled by Nielsen Soundscan (both at retail and digitally) and streaming activity provided by online music sources. # # # As the popularity of music streaming services increases, so does the amount of customization used to recommend songs to the listener. Certainly artist name recognition is a big indicator of whether or not a song will find success on the chart. However, the audio metadata features and lyrical content also play a role. # # ** The goal of this project is to predict the popularity of songs based on their audio metadata and lyrical metadata. ** # ## Who is the Client? # # There are at least two different groups of people who would be interested in the outcome of this project. The first group are music producers who will find it very beneficial to see how musical trends are changing over time when deciding which artists to sign and promote and current artists looking to have their songs chart could benefit from this knowledge during the writing process. # # The second group is music streaming service companies like Spotify since their business model is based around the selection of their library and the power of their music recommendation system. Along with the audio metadata features taken from Spotify's API, we also use lyrical metadata in our model, which could be useful to implement in a music recommendation system. # # 2. Imports #Usual Imports import time from time import sleep import datetime import re import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from pandas.tools.plotting import scatter_matrix # Imports for web scraping Lyrics data from bs4 import BeautifulSoup import urllib import requests # Imports for text analysis from collections import Counter from nltk.corpus import stopwords # Imports for statistics import statsmodels.stats.api as sms import scipy.stats as stats # Imports for machine learning from sklearn.cluster import KMeans # # 3. Datasets # # The Billboard Hot 100 dataset is located at the Ultimate Music Database http://www.umdmusic.com/default.asp?Lang=English&Chart=D. After creating a dataframe of unique artists and tracks from this dataset, we used a library called Spotipy to scrape the metadata from Spotify using the Spotify web API with our unique API credentials. Using this method we obtained 74% of the tracks in the Billboard Hot 100 dataset. 
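# For reference, the kind of Spotipy call involved might look roughly like the sketch below. This is
# not the exact script used for the scrape; the client ID/secret are placeholders for your own
# Spotify API credentials, and the search query is just an illustrative example.

# +
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

sp = spotipy.Spotify(client_credentials_manager=SpotifyClientCredentials(client_id="YOUR_CLIENT_ID",
                                                                          client_secret="YOUR_CLIENT_SECRET"))
result = sp.search(q="track:Hey Jude artist:The Beatles", type="track", limit=1)
track_id = result["tracks"]["items"][0]["id"]
print(sp.audio_features([track_id])[0])
# -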
However, we discovered that some songs on the Billboard Hot 100 tracks were now associated with the wrong Spotify metadata since some of the results obtained were for karaoke or cover versions of the same track or a different track by the same artist. Thus the dataset would require more delicate cleaning. # # In searching for a finer cleaning approach, we discovered a post by Brady Fowler at Decibels & Decimals https://github.com/dbfowler/billboard_volatility/tree/master/Raw%20Data using a similar dataset. His dataset contained a higher percentage of tracks and he used more sophisticated cleaning methods, so we use his dataset here. The lyric dataset was scraped from various websites. (We described the lyric scraping procedure in the Data Story.) # # According to the Decibels & Decimels website there are some errors in the SpotifyID's attached to each track as well as inconsistent spelling of tracks/artist names. For example, for some tracks in the Billboard Hot 100 the SpotifyID may be linked to a different version of the same song. No attempt was made to correct for these inconsistencies. # # Due to the incompleteness of the dataset for the early years, our Billboard Hot 100 dataset contains the top 100 tracks ranked by popularity for every week between September 8, 1958 and January 8, 2017. # # **Note:** When considering yearly trends we do not include the years 1958 and 2017 due to lack of chart data. billboard_data = pd.read_csv("all_charts.csv",low_memory=False) spotify_data = pd.read_csv("audio_features.csv") billboard_lyrics = pd.read_csv("all_lyrics.csv",encoding='latin1') # # 4. Prepare a Tidy Dataset # For this project the variables we are interested in are: chartDate, title, artist, peakPos, lastPos, weeks, rank, change and spotifyLink from billboard_data and the acousticness, danceability, duration_ms, energy, id, instrumentalness, key, liveness, loudness, mode, speechiness, tempo, time_signature and valence variables from spotify_data. billboard_sub = billboard_data[["chartDate","title","artist","peakPos","lastPos","weeks","rank","change","spotifyID"]] spotify_sub = spotify_data[["acousticness", "danceability", "duration_ms", "energy", "id","instrumentalness", "key", "liveness", "loudness", "mode", "speechiness", "tempo", "time_signature","valence"]] # ## What important fields and information does the data set have? billboard_sub.info() # ## Variable Descriptions # # For the Billboard dataset we focus on the following features. # # ** chartDate:** Always on a Saturday. Represents the ranking for the preceding week. # # ** title: ** Title of the song (Note that this title might be inconsistent with the title from the Spotify dataset.) # # ** artist: ** Artist of the song (Note that if the track features another artist that will be counted separately.) # # ** peakPos: ** The highest position the track ever reached on the charts. # # ** lastPos: ** The previous position on the track. A value of zero could mean that this is a new song that never charted before or that the song is re-entering the charts. # # ** weeks: ** There is some inconsistency with this data. For some entries it is the number of weeks the song has been on the chart up to that point, but for some older entries it is the total number of weeks the song was on the charts. # # ** rank: ** The current rank of the song for that week. # # ** change: ** The change in the rank since the previous week. Songs that were not on the charts the previous week are either "New", "Re-Entry" or "Hot Shot Debut". 
# # ** spotifyID: ** The SpotifyID for the track for the Billboard dataset. spotify_sub.info() # ## Variable Descriptions # # For the Spotify dataset we focus on the following features. # # ** acousticness: ** A confidence measure from 0.0 to 1.0 of whether the track is acoustic. 1.0 represents high confidence the track is acoustic. # # ** danceability: ** Danceability describes how suitable a track is for dancing based on a combination of musical elements including tempo, rhythm stability, beat strength, and overall regularity. A value of 0.0 is least danceable and 1.0 is most danceable. # # ** duration_ms: ** The duration of the track in milliseconds. # # ** energy: ** Energy is a measure from 0.0 to 1.0 and represents a perceptual measure of intensity and activity. Typically, energetic tracks feel fast, loud, and noisy. For example, death metal has high energy, while a Bach prelude scores low on the scale. Perceptual features contributing to this attribute include dynamic range, perceived loudness, timbre, onset rate, and general entropy. # # ** id: ** The Spotify ID for the track for the Spotify audio features dataset. # # ** instrumentalness: ** Predicts whether a track contains no vocals. "Ooh" and "aah" sounds are treated as instrumental in this context. Rap or spoken word tracks are clearly "vocal". The closer the instrumentalness value is to 1.0, the greater likelihood the track contains no vocal content. Values above 0.5 are intended to represent instrumental tracks, but confidence is higher as the value approaches 1.0. # # ** key: ** The key the track is in. Integers map to pitches using standard Pitch Class notation. E.g. 0 = C, 1 = C♯/D♭, 2 = D, and so on. # # ** liveness: ** Detects the presence of an audience in the recording. Higher liveness values represent an increased probability that the track was performed live. A value above 0.8 provides strong likelihood that the track is live. # # ** loudness: ** The overall loudness of a track in decibels (dB). Loudness values are averaged across the entire track and are useful for comparing relative loudness of tracks. Loudness is the quality of a sound that is the primary psychological correlate of physical strength (amplitude). Values typical range between -60 and 0 db. # # ** mode: ** Mode indicates the modality (major or minor) of a track, the type of scale from which its melodic content is derived. Major is represented by 1 and minor is 0. # # ** speechiness: ** Speechiness detects the presence of spoken words in a track. The more exclusively speech-like the recording (e.g. talk show, audio book, poetry), the closer to 1.0 the attribute value. Values above 0.66 describe tracks that are probably made entirely of spoken words. Values between 0.33 and 0.66 describe tracks that may contain both music and speech, either in sections or layered, including such cases as rap music. Values below 0.33 most likely represent music and other non-speech-like tracks. # # ** tempo: ** The overall estimated tempo of a track in beats per minute (BPM). In musical terminology, tempo is the speed or pace of a given piece and derives directly from the average beat duration. # # ** time signature: ** An estimated overall time signature of a track. The time signature (meter) is a notational convention to specify how many beats are in each bar (or measure). # # ** valence: ** A measure from 0.0 to 1.0 describing the musical positiveness conveyed by a track. Tracks with high valence sound more positive (e.g. 
happy, cheerful, euphoric), while tracks with low valence sound more negative (e.g. sad, depressed, angry). # ## What are its limitations i.e. what are some questions that you cannot answer with this data set? # One of the big limitations of this dataset is the absence of a genre tag for each song. It would be very interesting to see how genre popularity has chaned over time. Another limitation is the lack of an artist popularity metric. Clearly, artist popularity is a big indicator on whether or not a song will be successful. This information is present in other datasets, but we do not pursue it here due to time constraints. # ## What kind of cleaning and wrangling did you need to do? # We now merge the billboard and Spotify datasets together by their spotifyID and id variables respectively. merged_data=pd.merge(billboard_sub,spotify_sub, how="outer",left_on="spotifyID",right_on="id") # Replace weird characters: merged_data=merged_data.replace({'\x83': '?'}, regex=True).replace({'\x82': '?'}, regex=True).replace({'\x80\x9c': '??'}, regex=True) # Subset the lyrics data by only the artist, lyrics, source and track variables. lyrics_sub=billboard_lyrics[["artist","lyrics","track","source"]] # Next merge the merged dataset above with the lyrics dataset. big_dataset=pd.merge(merged_data, lyrics_sub, how="outer",left_on=["artist","title"],right_on=["artist", "track"]) # We then add a year column to facilitate the observation of yearly trends. year = big_dataset.chartDate.replace({'-': ''}, regex=True) year = year.apply(np.float) year = year//10000 pd.set_option('precision', 0) year=year.rename("year") year_data=pd.concat([big_dataset,year],axis=1) year_data.head(3) # # 5. Exploratory Data Analysis on the Billboard Hot 100 chart data # ## How many unique tracks and unique artists make the Billboard Hot 100 each year? # # Note that for tracks that feature multiple artists, we only count the main artist when measuring unique artists. artist_count=year_data.groupby(["artist","title","year"]).mean().reset_index() for row in range(0,len(artist_count)): artist_array = re.split(r'.Featuring+', artist_count.iloc[row,0]) artist_count.iloc[row,0]=artist_array[0] yearly_track_count = artist_count.groupby(["year"]).count().reset_index() yearly_tracks=pd.DataFrame() for year_row in range(1958,2017): year_frame=artist_count[artist_count.year==year_row] unique_artist=len(year_frame.artist.unique()) add_frame = {"year":year_row, "unique_artist_count": unique_artist} yearly_tracks = yearly_tracks.append(add_frame,ignore_index=True) fig, ax = plt.subplots(figsize=(10, 8)) ax.spines["top"].set_visible(False) ax.spines["right"].set_visible(False) ax.set_facecolor('white') ax.get_xaxis().tick_bottom() ax.get_yaxis().tick_left() plt.plot(yearly_track_count.year, yearly_track_count.artist, color="#3F5D7D", linewidth=0.8,label="Unique Tracks") plt.plot(yearly_tracks.year, yearly_tracks.unique_artist_count, color="red", linewidth=0.8,label="Unique Artists") plt.xlabel('Year', fontsize=12) plt.ylabel('Count', fontsize=12) plt.title('Track/Artist diversity over time', fontsize=14) plt.xlim([1959,2016]) plt.legend() plt.show() # We see that during the 1960's there were many more unique tracks making the Billboard Hot 100. This could be a attributed to the lack of options for people to listen to music and radio was the main medium so listeners would likely tire of the same songs more quickly so it's possible that songs didn't stay on the charts for very long. 
# # The number of unique tracks steadily declined until 1997 where there was a slight uptick again and the amount increased again through the mid 2000's which could be a result in the change in how song popularity is counted by taking internet download/streaming into account. # # The number of unique artists followed a similar trajectory although the difference was not that large and without a noticeable increase in the 2000's. # # ** We expect to see an inverse relationship between the number of unique tracks in the Billboard Hot 100 and the average number of weeks a track spends on the chart. ** week_count=year_data.groupby(["artist","title", "year"]).max().reset_index() week_average=week_count.groupby(["year"]).mean().reset_index() fig, ax = plt.subplots(figsize=(10, 8)) ax.spines["top"].set_visible(False) ax.spines["right"].set_visible(False) ax.set_facecolor('white') ax.get_xaxis().tick_bottom() ax.get_yaxis().tick_left() plt.plot(week_average.year, week_average.weeks, color="#3F5D7D", linewidth=0.6) plt.xlabel('Year', fontsize=12) plt.ylabel('Average number of weeks on the charts', fontsize=12) plt.title('Average number of weeks on the chart vs time', fontsize=14) plt.xlim([1959,2016]) plt.show() # As we expected, tracks began spending a lot more time on the charts on average starting in the late 1960's up until the mid 1990's where the average decreased until 2011. # ## What is the relationship between the average number of weeks spent on the chart versus the number of unique tracks? ax= sns.regplot(yearly_track_count.artist,y=week_average.weeks) ax.spines["top"].set_visible(False) ax.spines["right"].set_visible(False) ax.set_facecolor('white') ax.get_xaxis().tick_bottom() ax.get_yaxis().tick_left() plt.xlabel('Unique tracks in the charts', fontsize=12) plt.ylabel('Average number of weeks on the charts', fontsize=12) plt.title('Average number of weeks on the chart vs unique tracks on the charts per year', fontsize=14) # The limits on the x-axis are chosen to ignore the data from 1958 and 2017 which are incomplete. plt.xlim([350,850]) plt.ylim([5,20]) plt.show() sns.plt.show() # We see the expected negative correlation between the two features. # # Along with unique tracks we are also interested in artists that have only made the Billboard Hot 100 one time, so-called one-hit wonders. We investigate the number of one-hit wonders by year. # + one_hits=year_data.groupby(["artist","title"]).count().reset_index() one_hits=one_hits[["artist","title"]] # We only count the main artist when considering tracks featuring multiple artists. 
for row in range(0,len(one_hits)): artist_array = re.split(r'.Featuring+', one_hits.iloc[row,0]) one_hits.iloc[row,0]=artist_array[0] one_hits_new=one_hits.groupby(["artist"]).count().reset_index() hit_count=one_hits_new.sort_values(by="title",ascending=False) hit_count.columns=["artist","counted"] # group data by number of hits hit_percent=pd.DataFrame() sum_count=0 for i in range(1,11): percent_count=len(hit_count[hit_count.counted==i])/len(hit_count)*100 sum_count+=percent_count add_frame = {"number_of_hits":str(i), "song_percent": percent_count} hit_percent = hit_percent.append(add_frame,ignore_index=True) add_end_frame={"number_of_hits":">10", "song_percent": 100-sum_count} hit_percent=hit_percent.append(add_end_frame,ignore_index=True) colors = ['#AB1E00','b' , 'y', 'g','pink','#965A0D','orange','purple','#F8F517','grey','turquoise'] labels_song_count = hit_percent.number_of_hits fig, ax = plt.subplots(figsize=(8, 8)) # make the plot square #pie = ax.pie(hit_percent.song_percent, colors=colors, labels=labels_song_count,autopct='%1.1f%%') pie = ax.pie(hit_percent.song_percent, labels=labels_song_count, colors=colors) plt.legend(loc = 'best', labels=['%s Hits, %1.1f %%' % (l, s) for l, s in zip(labels_song_count, hit_percent.song_percent)]) plt.title('Number of Hits in the Billboard Hot 100 per Artist',fontsize=14) plt.tight_layout() plt.show() # - # ## Which artists have the most songs in the Billboard Hot 100? hit_count.head(15) # The big take away from this is how massively popular Glee was. In the five years between 2009 and 2013 they had 205 songs chart on the Billboard Hot 100 which is more than double the number of hits for the next highest ranked artist. Drake and Taylor Swift make up the next two highest positions despite having smaller discographies than a lot of the older artists, but due to the way that songs are currently counted, for big artists today every song on their albums can end up charting whereas in the past not every song on the album was released as a single. This suggests that artist name may be by far the biggest predictor of whether or not a song will perform well on the Billbaord Hot 100. # ## Has the percentage of one hit wonders changed over time? one_hits=artist_count.groupby(["artist"]).count().reset_index() # We only count the main artist when considering tracks featuring multiple artists. for row in range(0,len(one_hits)): artist_array = re.split(r'.Featuring+', one_hits.iloc[row,0]) one_hits.iloc[row,0]=artist_array[0] one_hits_new=one_hits.groupby(["artist"]).sum().reset_index() # We make a logistic column of 1 if the artist has only one track in the Billboard Hot 100 for all years and # and 0 if the artist appears as the main artist on at least two different tracks in any years. 
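# (Added aside, not part of the original notebook) A vectorized pandas equivalent of the row-by-row
# loop below, left commented out so it does not change the notebook's behaviour; it assumes the
# count column is the second column of one_hits_new, as the loop itself does:
# one_hits_count = pd.DataFrame({"artist": one_hits_new["artist"],
#                                "one_hit_wonder": (one_hits_new.iloc[:, 1] <= 1).astype(int)})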
one_hits_count=pd.DataFrame() for row in range(0,len(one_hits_new)): if one_hits_new.iloc[row,1]>1: one_hit_id=0 else: one_hit_id=1 add_frame = {"artist":one_hits_new.iloc[row,0], "one_hit_wonder": one_hit_id} one_hits_count = one_hits_count.append(add_frame,ignore_index=True) num_hits=artist_count.groupby(["artist","year"]).count().reset_index() one_hit_merge=pd.merge(one_hits_count,num_hits, how="outer",left_on="artist",right_on="artist") one_hit_percent=one_hit_merge.groupby("year").mean().reset_index() ax = sns.regplot(x="year",y=one_hit_percent.one_hit_wonder*100,data=one_hit_percent) ax.spines["top"].set_visible(False) ax.spines["right"].set_visible(False) ax.set_facecolor('white') ax.get_xaxis().tick_bottom() ax.get_yaxis().tick_left() plt.plot(one_hit_percent.year, one_hit_percent.one_hit_wonder*100, color="red", linewidth=0.6) plt.xlabel('Year', fontsize=12) plt.ylabel('Percentage of one-hit wonders', fontsize=12) plt.title('Percentage of tracks that are one hit wonders over time', fontsize=14) plt.xlim([1959,2016]) sns.plt.show() plt.show() # ** The percentage of one-hit wonders seems to have decreased over time, but is this really the case? ** # # On December 5, 1998, the Billboard Hot 100 changed from being a "singles" chart to a "songs" chart, which allowed non-single tracks to be counted. We separate the data into two sets 1959-1998 and 1999-2016 and compare the average percentage of one-hit wonders each year. # ## Hypothesis Test: Percentage of one-hit wonders # # $$H_0: \mu_\text{old} - \mu_\text{new} = 0$$ # # $$H_a: \mu_\text{old} - \mu_\text{new} \neq 0$$ old_ones=one_hit_percent[1:41] new_ones=one_hit_percent[41:-1] z,p= stats.ttest_ind(old_ones.one_hit_wonder, new_ones.one_hit_wonder,equal_var=False) cm = sms.CompareMeans(sms.DescrStatsW(old_ones.one_hit_wonder), sms.DescrStatsW(new_ones.one_hit_wonder)) conf_low,conf_high =cm.tconfint_diff(alpha = 0.05,usevar='unequal') print("Z-Statistic: ", z) print("95% Confidence Interval: ", (conf_low,conf_high)) print("p-value: ",p) # We obtain a p-value that is close to zero and with a statistical significance of alpha = 0.05 we obtain a 95% confidence interval that the true mean difference lies between 1.2% and 5.2%. # # ** Thus we reject the null hypothesis. ** # # We conclude that the percentage of tracks on the chart coming from artists that are one-hit wonders has decreased recently but the difference is not very significant. # # 6. Exploratory Data Analysis on the Spotify Audio Metadata # ## What is the percentage of songs with Spotify audio metadata? # First we created a subset of individual tracks from the merged_data dataframe by subsetting by the "title" and "artist" variables. search_data=merged_data.groupby(["title","artist"]).size().reset_index() print("{0:.2f}%".format(len(spotify_data)/len(search_data)*100)) # ** We now plot histograms of several of the Spotify audio metadata features. 
** # + fig = plt.figure(figsize=(8,11)) axone=plt.subplot(5,1,1) axone.spines["top"].set_visible(False) axone.spines["right"].set_visible(False) axone.set_facecolor('white') axone.get_xaxis().tick_bottom() axone.get_yaxis().tick_left() plt.hist(big_dataset.acousticness.dropna(), color="#3F5D7D", bins=50,alpha=0.75) plt.xlim(0,1) plt.xlabel("Acousticness", fontsize=14) plt.ylabel('Count', fontsize=14) axtwo=plt.subplot(5,1,2) axtwo.spines["top"].set_visible(False) axtwo.spines["right"].set_visible(False) axtwo.set_facecolor('white') axtwo.get_xaxis().tick_bottom() axtwo.get_yaxis().tick_left() plt.xlim(0,1) plt.hist(big_dataset.danceability.dropna(), color="#006400", bins=50,alpha=0.75) plt.xlabel("Danceability", fontsize=14) plt.ylabel('Count', fontsize=14) axthree=plt.subplot(5,1,3) axthree.spines["top"].set_visible(False) axthree.spines["right"].set_visible(False) axthree.set_facecolor('white') axthree.get_xaxis().tick_bottom() axthree.get_yaxis().tick_left() plt.hist(big_dataset.duration_ms.dropna()/1000, color="#a52a2a", bins=50,alpha=0.75) plt.xlim(50,600) plt.xlabel("Duration (s)", fontsize=14) plt.ylabel('Count', fontsize=14) axfour=plt.subplot(5,1,4) axfour.spines["top"].set_visible(False) axfour.spines["right"].set_visible(False) axfour.set_facecolor('white') axfour.get_xaxis().tick_bottom() axfour.get_yaxis().tick_left() plt.hist(big_dataset.energy.dropna(), color="#ff8c00",bins=50,alpha=0.75) plt.xlim(0,1) plt.xlabel("Energy", fontsize=14) plt.ylabel('Count', fontsize=14) axfive=plt.subplot(5,1,5) axfive.spines["top"].set_visible(False) axfive.spines["right"].set_visible(False) axfive.set_facecolor('white') axfive.get_xaxis().tick_bottom() axfive.get_yaxis().tick_left() plt.hist(big_dataset.instrumentalness.dropna(), color="#8b4513",bins=50,alpha=0.75) plt.xlabel("Instrumentalness", fontsize=14) plt.ylabel('Count', fontsize=14) plt.xlim(0,1) plt.tight_layout() plt.savefig("metadata_histograms.png", bbox_inches="tight"); plt.show() # + fig = plt.figure(figsize=(8, 11)) axone=plt.subplot(5,1,1) axone.spines["top"].set_visible(False) axone.spines["right"].set_visible(False) axone.set_facecolor('white') axone.get_xaxis().tick_bottom() axone.get_yaxis().tick_left() plt.hist(big_dataset.liveness.dropna(), color="#3F5D7D", bins=50,alpha=0.75) plt.xlim(0,1) plt.xlabel("Liveness", fontsize=14) plt.ylabel('Count', fontsize=14) axtwo=plt.subplot(5,1,2) axtwo.spines["top"].set_visible(False) axtwo.spines["right"].set_visible(False) axtwo.set_facecolor('white') axtwo.get_xaxis().tick_bottom() axtwo.get_yaxis().tick_left() plt.xlim(-25,0) plt.hist(big_dataset.loudness.dropna(), color="#006400", bins=50,alpha=0.75) plt.xlabel("Loudness (dB)", fontsize=14) plt.ylabel('Count', fontsize=14) axthree=plt.subplot(5,1,3) axthree.spines["top"].set_visible(False) axthree.spines["right"].set_visible(False) axthree.set_facecolor('white') axthree.get_xaxis().tick_bottom() axthree.get_yaxis().tick_left() plt.hist(big_dataset.speechiness.dropna(), color="#a52a2a", bins=50,alpha=0.75) plt.xlim(0,1) plt.xlabel("Speechiness", fontsize=14) plt.ylabel('Count', fontsize=14) axfour=plt.subplot(5,1,4) axfour.spines["top"].set_visible(False) axfour.spines["right"].set_visible(False) axfour.set_facecolor('white') axfour.get_xaxis().tick_bottom() axfour.get_yaxis().tick_left() plt.hist(big_dataset.tempo.dropna(), color="#ff8c00",bins=50,alpha=0.75) plt.xlim(40,225) plt.xlabel("Tempo (BPM)", fontsize=14) plt.ylabel('Count', fontsize=14) axfive=plt.subplot(5,1,5) axfive.spines["top"].set_visible(False) 
axfive.spines["right"].set_visible(False) axfive.set_facecolor('white') axfive.get_xaxis().tick_bottom() axfive.get_yaxis().tick_left() plt.hist(big_dataset.valence.dropna(), color="#8b4513",bins=50,alpha=0.75) plt.xlabel("Valence", fontsize=14) plt.ylabel('Count', fontsize=14) plt.xlim(0,1) plt.tight_layout() plt.savefig("metadata_histograms_2.png", bbox_inches="tight"); plt.show() # - # ** Things of Note:** # # Acousticness, instrumentalness and speechiness all seem to be heavily skewed towards zero. This is because songs on the billboard chart usually feature electric instruments, have lyrics and are composed of very few spoken words parts respectively. On the other hand, duration, liveness,loudness and especially danceability, energy and tempo seem to be normally distributed. It would make sense that the latter features would be closely related since songs with more energy and a faster tempo are easier to dance to up to a certain point. # ## How do the audio metadata features change over time? # # We subset the year data set so that we only consider the first week that the song entered the chart. # + # Group songs by year year_data_sub=year_data[year_data.lastPos==0] audio_means=year_data_sub.groupby(["year"]).mean().reset_index() fig = plt.figure(figsize=(10, 14)) axone=plt.subplot(5,1,1) axone.spines["top"].set_visible(False) axone.spines["right"].set_visible(False) axone.set_facecolor('white') axone.get_xaxis().tick_bottom() axone.get_yaxis().tick_left() plt.plot(audio_means.year, audio_means.acousticness, color="#3F5D7D", linewidth=0.6) plt.xlabel('Date', fontsize=12) plt.ylabel('Average Acousticness', fontsize=12) plt.title('Average Track Acousticness over Time', fontsize=14) plt.xlim([1958,2016]) axtwo=plt.subplot(5,1,2) axtwo.spines["top"].set_visible(False) axtwo.spines["right"].set_visible(False) axtwo.set_facecolor('white') axtwo.get_xaxis().tick_bottom() axtwo.get_yaxis().tick_left() plt.plot(audio_means.year,audio_means.danceability, color="green", linewidth=0.6) plt.xlabel('Date', fontsize=12) plt.ylabel('Average Danceability', fontsize=12) plt.title('Average Track Danceability over Time', fontsize=14) plt.xlim([1958,2016]) axthree=plt.subplot(5,1,3) axthree.spines["top"].set_visible(False) axthree.spines["right"].set_visible(False) axthree.set_facecolor('white') axthree.get_xaxis().tick_bottom() axthree.get_yaxis().tick_left() plt.plot(audio_means.year, audio_means.duration_ms/1000, color="red", linewidth=0.6) plt.xlabel('Date', fontsize=12) plt.ylabel('Average Duration (s)', fontsize=12) plt.title('Average Track Duration (s) over Time', fontsize=14) plt.xlim([1958,2016]) axfour=plt.subplot(5,1,4) axfour.spines["top"].set_visible(False) axfour.spines["right"].set_visible(False) axfour.set_facecolor('white') axfour.get_xaxis().tick_bottom() axfour.get_yaxis().tick_left() plt.plot(audio_means.year, audio_means.energy, color="purple", linewidth=0.6) plt.xlabel('Date', fontsize=12) plt.ylabel('Average Energy', fontsize=12) plt.title('Average Track Energy over Time', fontsize=14) plt.xlim([1958,2016]) axfive = plt.subplot(5,1,5) axfive.spines["top"].set_visible(False) axfive.spines["right"].set_visible(False) axfive.set_facecolor('white') axfive.get_xaxis().tick_bottom() axfive.get_yaxis().tick_left() plt.plot(audio_means.year,audio_means.instrumentalness, color="brown", linewidth=0.6) plt.xlabel('Date', fontsize=12) plt.ylabel('Average Instrumentalness', fontsize=12) plt.title('Average Track Instrumentalness over Time', fontsize=14) plt.xlim([1958,2016]) plt.tight_layout() 
plt.show() # + fig = plt.figure(figsize=(10, 14)) axone=plt.subplot(5,1,1) axone.spines["top"].set_visible(False) axone.spines["right"].set_visible(False) axone.set_facecolor('white') axone.get_xaxis().tick_bottom() axone.get_yaxis().tick_left() plt.plot(audio_means.year, audio_means.liveness, color="#3F5D7D", linewidth=0.6) plt.xlabel('Date', fontsize=12) plt.ylabel('Average Liveness', fontsize=12) plt.title('Average Track Liveness over Time', fontsize=14) plt.xlim([1958,2016]) axtwo=plt.subplot(5,1,2) axtwo.spines["top"].set_visible(False) axtwo.spines["right"].set_visible(False) axtwo.set_facecolor('white') axtwo.get_xaxis().tick_bottom() axtwo.get_yaxis().tick_left() plt.plot(audio_means.year,audio_means.loudness, color="green", linewidth=0.6) plt.xlabel('Date', fontsize=12) plt.ylabel('Average Loudness (dB)', fontsize=12) plt.title('Average Track Loudness (dB) over Time', fontsize=14) plt.xlim([1958,2016]) axthree=plt.subplot(5,1,3) axthree.spines["top"].set_visible(False) axthree.spines["right"].set_visible(False) axthree.set_facecolor('white') axthree.get_xaxis().tick_bottom() axthree.get_yaxis().tick_left() plt.plot(audio_means.year, audio_means.speechiness, color="red", linewidth=0.6) plt.xlabel('Date', fontsize=12) plt.ylabel('Average Speechiness', fontsize=12) plt.title('Average Track Speechiness over Time', fontsize=14) plt.xlim([1958,2016]) axfour=plt.subplot(5,1,4) axfour.spines["top"].set_visible(False) axfour.spines["right"].set_visible(False) axfour.set_facecolor('white') axfour.get_xaxis().tick_bottom() axfour.get_yaxis().tick_left() plt.plot(audio_means.year, audio_means.tempo, color="purple", linewidth=0.6) plt.xlabel('Date', fontsize=12) plt.ylabel('Average Tempo (BPM)', fontsize=12) plt.title('Average Track Tempo (BPM) over Time', fontsize=14) plt.xlim([1958,2016]) axfive = plt.subplot(5,1,5) axfive.spines["top"].set_visible(False) axfive.spines["right"].set_visible(False) axfive.set_facecolor('white') axfive.get_xaxis().tick_bottom() axfive.get_yaxis().tick_left() plt.plot(audio_means.year,audio_means.valence, color="brown", linewidth=0.6) plt.xlabel('Date', fontsize=12) plt.ylabel('Average Valence', fontsize=12) plt.title('Average Track Valence over Time', fontsize=14) plt.xlim([1958,2016]) plt.tight_layout() plt.show() # - # ** Things of Note:** # # _Acousticness_: Most songs had a higher acousticness score in the 60's possibly due to the popular of rockabilly and rock and roll which steadily declined through the 70's and 80's as more electronic instruments were added. # # _Danceability_: Steadily increased through the 70's possibly due to the popularity of disco and continued increasing through the 80's and 90's. # # _Duration_: The average length of a song was around 2 and half minutes in the 1960's possibly tied to 7" EP format that singles were released on and increased up to a length around 4 and a half minutes through the 70's until the 80's possibly as a result of the popularity of more complicated rock songs. # # _Energy_ and _Instrumentalness_: Songs have become more energetic over time, possibly related to becoming more danceable and less instrumental possibly related to the increase in speechiness. # # _Loudness_: We see a sharp increase in the 90's of Loudness which can be attributed to the popularity of the CD format and producers making songs louder so that tracks would stand out more on the radio. 
# # _Speechiness_ and _Valence_: Speechiness increased a lot during the 90's as rap became more popular while _Valence_ decreased possibly as a result of the growing popularity of grunge and continued through the 2000's. # ## What is the most popular key? # First group data by artist and title first_group_key=year_data.groupby(["artist","title","key","mode"]).count().reset_index() # Group data by key and mode key_data=first_group_key.groupby(["key","mode"]).count().reset_index() # Rename keys according to letters and major or minor key_name_list = pd.Series(["Cm","C","C♯m","C♯","Dm","D","D♯m","D♯","Em","E","Fm","F","F♯m","F♯","Gm","G","G♯m","G♯","Am","A",'A♯m',"A♯","Bm","B"]) key_name_list=key_name_list.rename("key_name") # Join the key list to the grouped key data key_data=pd.concat([key_data,key_name_list],axis=1) key_data=key_data.sort_values(by="year",ascending=False) key_list=key_data.key_name # Plot key frequency histogram y_pos = np.arange(len(key_list)) fig,axone=plt.subplots(figsize=(10, 8)) axone.spines["top"].set_visible(False) axone.spines["right"].set_visible(False) axone.set_facecolor('white') axone.get_xaxis().tick_bottom() axone.get_yaxis().tick_left() plt.bar(y_pos, key_data.year, width=0.5,color="#AB1E00") plt.xticks(y_pos,key_data.key_name) plt.xlabel("Key", fontsize=12) plt.ylabel('Frequency', fontsize=12) plt.title('Key Frequency',fontsize=14) plt.tight_layout() plt.show() # ** We see that most popular songs are in the key of C major closely followed by G major. ** # This is likely a result of the fact that these keys are easy to play on both piano and guitar. # In fact many songs are not only in the same key, but also the same chord progression as well. See this song https://www.youtube.com/watch?v=oOlDewpCfZQ by the Axis of Awesome for reference. # # It's also clear that major keys are much more popular than songs in minor keys. # # # **Note** that this differs slightly from the analysis of all songs on Spotify which found that the most popular key is G major. # # # https://insights.spotify.com/us/2015/05/06/most-popular-keys-on-spotify/ # ## What is the most common time signature? # First group data by artist and title first_group_time=year_data.groupby(["artist","title", "time_signature"]).count().reset_index() # group data by time signature time_data=first_group_time.groupby(["time_signature"]).count().reset_index().sort_values(by="year",ascending=False) time_data["percent"]=time_data.year/len(first_group_time[first_group_time.time_signature.notnull()]) times = time_data.percent*100 colors = ['#AB1E00','b' , 'y', 'g'] labels_time = ["4/4", "3/4", "5/4","2/4"] fig, ax = plt.subplots(figsize=(8, 8)) # make the plot square pie = ax.pie(times, colors=colors, labels=labels_time) plt.legend(loc = 'best', labels=['%s, %1.1f %%' % (l, s) for l, s in zip(labels_time, times)]) plt.title('Time Signature',fontsize=14) plt.tight_layout() plt.show() # ** Here we see that the vast majority of songs on the Billboard Hot 100 are written in 4/4 Time. ** # ## Has key popularity changed over time? 
#Create a pivot table to apply KMeans clustering
key_names = key_data[["key","mode","key_name"]]
key_merged = pd.merge(year_data,key_names, how="outer", left_on=["key","mode"],right_on=["key","mode"]).dropna()
key_year_new=key_merged.groupby(["artist","title","year","key_name"]).count().reset_index()
key_year_second_sub=key_year_new.groupby(["year","key_name"]).count().reset_index()
key_subset=key_year_second_sub[["year","key_name","chartDate"]]
key_pivot=key_subset.pivot_table(index=["year"],columns=["key_name"])
key_pivot=key_pivot.fillna(0).reset_index()

x_cols = key_pivot[key_pivot.columns[1:]]
Ks = range(2, 10)
# Apply KMeans clustering for each K
km = [KMeans(n_clusters=k).fit(x_cols) for k in Ks]
# The sum-of-squares error (SS) within the clusters for each K
score = [-km[i].fit(x_cols).score(x_cols) for i in range(len(km))]

# Construct a plot showing SS for each K and pick K using this plot. For simplicity, we test 2 ≤ K < 10.
plt.plot(Ks, score)
plt.xlabel('k', size=12)
plt.ylabel('SS', size=12)
plt.title('SS vs k')
plt.show()

# ** We choose k=5 for our number of clusters. **

# We make a bar chart showing the number of points in each cluster for k-means under the best K.
# We use 5 clusters, as chosen above.
key_pivot['cluster'] = KMeans(n_clusters=5).fit_predict(x_cols)
width = 1/2
y = pd.DataFrame(key_pivot.cluster.value_counts()).reset_index()
y.columns=["cluster","counted"]
x = np.arange(len(y))
plt.bar(x, y.counted, width, color="black")
plt.xticks(x,y.cluster)
plt.xlabel('Cluster', size=12)
plt.ylabel('Cluster Size', size=12)
plt.title('Number of years in each cluster',size=14)
plt.show()

# **It's clear that cluster four represents the years 1958 and 2017 since there is less data for those years. What can we say about the other four clusters? **

key_cluster_zero=key_pivot[key_pivot.cluster==0]
key_cluster_one=key_pivot[key_pivot.cluster==1]
key_cluster_two=key_pivot[key_pivot.cluster==2]
key_cluster_three=key_pivot[key_pivot.cluster==3]
print("Cluster zero:", key_cluster_zero.year.unique())
print("Cluster one:", key_cluster_one.year.unique())
print("Cluster two:", key_cluster_two.year.unique())
print("Cluster three:", key_cluster_three.year.unique())

# ** We have that cluster zero represents the earliest years on the charts, followed by cluster three; cluster one represents the 90's and early 2000's, and cluster two represents every year since 2005.
** # + cluster_zero_sum=key_cluster_zero.sum().reset_index() cluster_zero_sum.columns=["chartDate", "key_name","counted"] cluster_one_sum=key_cluster_one.sum().reset_index() cluster_one_sum.columns=["chartDate","key_name","counted"] cluster_two_sum=key_cluster_two.sum().reset_index() cluster_two_sum.columns=["chartDate", "key_name","counted"] cluster_three_sum=key_cluster_three.sum().reset_index() cluster_three_sum.columns=["chartDate", "key_name","counted"] fig = plt.figure(figsize=(10, 14)) key_zero_sorted=cluster_zero_sum.sort_values(by="counted",ascending=False).reset_index(drop=True) key_zero_sorted=key_zero_sorted[1:25] key_list_zero_sorted=key_zero_sorted.key_name y_pos = np.arange(len(key_list_zero_sorted)) axzero=plt.subplot(4,1,1) axzero.spines["top"].set_visible(False) axzero.spines["right"].set_visible(False) axzero.set_facecolor('white') axzero.get_xaxis().tick_bottom() axzero.get_yaxis().tick_left() plt.bar(y_pos, key_zero_sorted.counted, width=0.5,color="red") plt.xticks(y_pos,key_zero_sorted.key_name) plt.title("Cluster Zero Key Frequency") plt.xlabel("Key", fontsize=12) plt.ylabel('Frequency', fontsize=12) axone=plt.subplot(4,1,3) key_one_sorted=cluster_one_sum.sort_values(by="counted",ascending=False).reset_index(drop=True) key_one_sorted=key_one_sorted[1:25] key_list_one_sorted=key_one_sorted.key_name axone.spines["top"].set_visible(False) axone.spines["right"].set_visible(False) axone.set_facecolor('white') axone.get_xaxis().tick_bottom() axone.get_yaxis().tick_left() plt.bar(y_pos, key_one_sorted.counted, width=0.5,color="blue") plt.xticks(y_pos,key_one_sorted.key_name) plt.title("Cluster One Key Frequency") plt.xlabel("Key", fontsize=12) plt.ylabel('Frequency', fontsize=12) axtwo=plt.subplot(4,1,4) key_two_sorted=cluster_two_sum.sort_values(by="counted",ascending=False).reset_index(drop=True) key_two_sorted=key_two_sorted[1:25] key_list_two_sorted=key_two_sorted.key_name axtwo.spines["top"].set_visible(False) axtwo.spines["right"].set_visible(False) axtwo.set_facecolor('white') axtwo.get_xaxis().tick_bottom() axtwo.get_yaxis().tick_left() plt.bar(y_pos, key_two_sorted.counted, width=0.5,color="green") plt.xticks(y_pos,key_two_sorted.key_name) plt.title("Cluster Two Key Frequency") plt.xlabel("Key", fontsize=12) plt.ylabel('Frequency', fontsize=12) axtwo=plt.subplot(4,1,2) key_three_sorted=cluster_three_sum.sort_values(by="counted",ascending=False).reset_index(drop=True) key_three_sorted=key_three_sorted[1:25] key_list_three_sorted=key_three_sorted.key_name axtwo.spines["top"].set_visible(False) axtwo.spines["right"].set_visible(False) axtwo.set_facecolor('white') axtwo.get_xaxis().tick_bottom() axtwo.get_yaxis().tick_left() plt.bar(y_pos, key_three_sorted.counted, width=0.5,color="black") plt.xticks(y_pos,key_three_sorted.key_name) plt.title("Cluster Three Key Frequency") plt.xlabel("Key", fontsize=12) plt.ylabel('Frequency', fontsize=12) plt.tight_layout() plt.show() # - # ** One interesting discovery is that the key C♯ has become much more popular over time and is now tied with G for the most popular key whereas D♯ has become the least popular major key and is now less popular than almost all minor keys. ** # # ** Speculation**: Now that rock music is no longer the dominant genre it once was, there are less songs being written with piano and guitar in mind which may explain the decrease in the popularity of songs in C and G major. 
# # ** Furthermore **: It would be very interesting to obtain reliable genre data for the Billboard Hot 100 songs to see if this can explain the change in key popularity. # # 7. Exploratory Data Analysis on the Lyrics # ## How many tracks have lyrics? From which sources? source_data=pd.DataFrame() lyrics_count = len(lyrics_sub[lyrics_sub.source ==0])+len(lyrics_sub[lyrics_sub.source==1]) source_data = source_data.append({"source":"metrolyrics.com","lyrics count":lyrics_count},ignore_index=True) lyrics_sources = ["songlyrics.com","lyricsmode.com","azlyrics.com","musixmatch.com"] for u in range(0,len(lyrics_sources)): lyrics_count = len(lyrics_sub[lyrics_sub.source ==u+2]) source_data = source_data.append({"source":lyrics_sources[u],"lyrics count":lyrics_count},ignore_index=True) no_lyrics= lyrics_sub[lyrics_sub.source.isnull()] source_data=source_data.append({"source":"unavailable","lyrics count":len(no_lyrics)},ignore_index=True) print("We have lyrics data for",len(lyrics_sub[lyrics_sub.lyrics.notnull()])/len(lyrics_sub)*100,"% of the tracks in the Billboard Hot 100.") source_data # ** Note: The reason azlyrics.com and musixmatch.com have such low lyrics counts besides being the 4th and 5th choices is because our IP was blocked after only a few attempts. ** # ## What are the most common words in the lyrics of each year? # We import a list of small stopwords from the NLTK package that we do not consider when counting the most common words. from nltk.corpus import stopwords stop_words_set=set(stopwords.words('english')) # The stopwords list is missing many contractions so we import these as a list as well. contract_file = open("contractions.txt") contraction_set = set(contract_file.read().split(',')) # Join the two sets together stop_plus_cont=stop_words_set.union(contraction_set) #The following is our regex for splitting the words in the lyrics into tokens. pattern = r"""(?x) # set flag to allow verbose regexps (?:[A-Z]\.)+ # abbreviations, e.g. U.S.A. |\d+(?:\.\d+)?%? # numbers, incl. currency and percentages |\w+(?:[-']\w+)* # words w/ optional internal hyphens/apostrophe |(?:[+/\-@&*]) # special characters with meanings """ import nltk from nltk import word_tokenize lyrics_list=billboard_lyrics[billboard_lyrics.lyrics.notnull()].reset_index() common_words = pd.DataFrame() lower_limit =0 upper_limit =len(lyrics_list) for row in range(lower_limit,upper_limit): row_string=lyrics_list.get_value(row,"lyrics").lower() row_tokens=nltk.regexp_tokenize(row_string, pattern) # We remove words that are only three letters or less and stop words row_tokens_sub=[w for w in row_tokens if len(w)>3 and (w not in stop_plus_cont)] row_text = nltk.Text(row_tokens) word_counts = Counter(row_tokens_sub) row_freq_dist=nltk.FreqDist(row_text) # We calculate lyric diversity as the number of unique words in the lyrics divided by the total lyrics. 
row_diversity= len(set(row_text))/len(row_text) add_frame = {"artist":lyrics_list.get_value(row,"artist"), "track":lyrics_list.get_value(row,"track"), "lyrics":lyrics_list.get_value(row,"lyrics"),"common words":word_counts.most_common(),"FreqDist":row_freq_dist,"Lyrical Diversity":row_diversity} common_words = common_words.append(add_frame,ignore_index=True) # We group the year data by title, artist lyrics and year chart_year=year_data.groupby(["title","artist","lyrics","year"]).count().reset_index() # We merge the year data with the common words dataset chart_sub=pd.merge(common_words,chart_year,left_on=["track","artist"],right_on=["title","artist"],how="outer") chart_sub=chart_sub[["artist","common words","title","year","Lyrical Diversity"]].sort_values(by="year") # ## Lyrical Diversity over time # # We define the lyrical diversity to be the number of unique words in the lyrics divided by the total number of the words. We first remove songs with a lyrical diversity of 1 which correspond to songs where the lyrics were either not known or the track is instrumental. actual_lyrics=chart_sub[chart_sub["Lyrical Diversity"]!=1] # Find the mean lyrical diversity for each year. div_means=actual_lyrics.groupby(["year"]).mean().reset_index() fig = plt.figure(figsize=(8, 6)) axone=plt.subplot(1,1,1) axone.spines["top"].set_visible(False) axone.spines["right"].set_visible(False) axone.set_facecolor('white') axone.get_xaxis().tick_bottom() axone.get_yaxis().tick_left() plt.plot(div_means.year, div_means["Lyrical Diversity"], color="red", linewidth=0.6) plt.xlabel('Date', fontsize=12) plt.ylabel('Average Lyrical Diversity', fontsize=12) plt.title('Average Lyrical Diversity over Time', fontsize=14) plt.xlim([1958,2016]) plt.show() # Here we see a remarkable steep decline in the average lyrical diversity of songs on the Billboard Hot 100. Songs are becoming much simpler with more repetition. # # 8. Predicting Song Popularity from Audio Metadata # # For the purposes of this project we say that a song is popular if it reaches the top ten. This is given by songs with 'peakPos'<=10. # # **Note**: We only count each song once since if we allow for songs to be counted with multiplicity then it might be possible that every song in the test data was actually already present in the training data, which could give a misleading high accuracy score. # We import the dataset that was cleaned and wrangled in the previous steps. all_data=pd.read_csv("all_data_spotify_lyrics.csv",encoding='latin1') # We subset the dataset on the useful features and rename the 'Lyrical Diversity' variable. all_data_sub=all_data[["chartDate", "title","artist","peakPos","weeks","rank", 'acousticness','danceability','duration_ms','energy','instrumentalness','key','liveness','loudness','mode','speechiness','tempo','time_signature','valence', 'year' ,'Lyrical Diversity']] all_data_sub = all_data_sub.rename(columns={'Lyrical Diversity': 'lyrical_diversity'}) # ## Are the audio metadata features different for more popular songs? 
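# (Added sketch, not part of the original analysis) Before plotting the yearly averages, a quick
# sanity check is a two-sample t-test on a single feature, comparing tracks that peaked in the top
# ten against the rest. Note that all_data_sub has one row per weekly chart entry, so this treats
# each week as an observation and is only a rough check.

# +
top = all_data_sub[all_data_sub.peakPos <= 10]['danceability'].dropna()
rest = all_data_sub[all_data_sub.peakPos > 10]['danceability'].dropna()
t_stat, p_val = stats.ttest_ind(top, rest, equal_var=False)
print("Danceability, top ten vs rest: t = %.2f, p = %.3g" % (t_stat, p_val))
# -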
top_ten_songs=all_data_sub[all_data_sub.peakPos<11] audio_means_ten=top_ten_songs.groupby("year").mean().reset_index() bottom_ninety=all_data_sub[all_data_sub.peakPos>10] audio_means_bottom=bottom_ninety.groupby("year").mean().reset_index() no_year_cols=["title","artist","peakPos",'acousticness','danceability','duration_ms','energy','instrumentalness','key','liveness','loudness','mode','speechiness','tempo','time_signature','valence','lyrical_diversity'] group_columns = ["title","artist","peakPos",'acousticness','danceability','duration_ms','energy','instrumentalness','key','liveness','loudness','mode','speechiness','tempo','time_signature','valence','lyrical_diversity','year'] # + fig = plt.figure(figsize=(10, 14)) axone=plt.subplot(5,1,1) axone.spines["top"].set_visible(False) axone.spines["right"].set_visible(False) axone.set_facecolor('white') axone.get_xaxis().tick_bottom() axone.get_yaxis().tick_left() plt.plot(audio_means_ten.year, audio_means_ten.acousticness, color="blue", linewidth=1,label="Top Ten") plt.plot(audio_means_bottom.year, audio_means_bottom.acousticness, color="black", linewidth=1,label="Bottom Ninety") plt.xlabel('Date', fontsize=12) plt.ylabel('Average Acousticness', fontsize=12) plt.title('Average Track Acousticness over Time', fontsize=14) plt.legend() plt.xlim([1958,2016]) axtwo=plt.subplot(5,1,2) axtwo.spines["top"].set_visible(False) axtwo.spines["right"].set_visible(False) axtwo.set_facecolor('white') axtwo.get_xaxis().tick_bottom() axtwo.get_yaxis().tick_left() plt.plot(audio_means_ten.year,audio_means_ten.danceability, color="green", linewidth=1,label="Top Ten") plt.plot(audio_means_bottom.year,audio_means_bottom.danceability, color="black", linewidth=1,label="Bottom Ninety") plt.xlabel('Date', fontsize=12) plt.ylabel('Average Danceability', fontsize=12) plt.title('Average Track Danceability over Time', fontsize=14) plt.legend() plt.xlim([1958,2016]) axthree=plt.subplot(5,1,3) axthree.spines["top"].set_visible(False) axthree.spines["right"].set_visible(False) axthree.set_facecolor('white') axthree.get_xaxis().tick_bottom() axthree.get_yaxis().tick_left() plt.plot(audio_means_ten.year, audio_means_ten.duration_ms/1000, color="red", linewidth=1,label="Top Ten") plt.plot(audio_means_bottom.year, audio_means_bottom.duration_ms/1000, color="black", linewidth=1,label="Bottom Ninety") plt.xlabel('Date', fontsize=12) plt.ylabel('Average Duration (s)', fontsize=12) plt.title('Average Track Duration (s) over Time', fontsize=14) plt.legend() plt.xlim([1958,2016]) axfour=plt.subplot(5,1,4) axfour.spines["top"].set_visible(False) axfour.spines["right"].set_visible(False) axfour.set_facecolor('white') axfour.get_xaxis().tick_bottom() axfour.get_yaxis().tick_left() plt.plot(audio_means_ten.year, audio_means_ten.energy, color="purple", linewidth=1,label="Top Ten") plt.plot(audio_means_bottom.year, audio_means_bottom.energy, color="black", linewidth=1,label="Bottom Ninety") plt.xlabel('Date', fontsize=12) plt.ylabel('Average Energy', fontsize=12) plt.title('Average Track Energy over Time', fontsize=14) plt.legend() plt.xlim([1958,2016]) axfive = plt.subplot(5,1,5) axfive.spines["top"].set_visible(False) axfive.spines["right"].set_visible(False) axfive.set_facecolor('white') axfive.get_xaxis().tick_bottom() axfive.get_yaxis().tick_left() plt.plot(audio_means_ten.year,audio_means_ten.instrumentalness, color="brown", linewidth=1,label="Top Ten") plt.plot(audio_means_bottom.year,audio_means_bottom.instrumentalness, color="black", linewidth=1,label="Bottom Ninety") 
plt.xlabel('Date', fontsize=12) plt.ylabel('Average Instrumentalness', fontsize=12) plt.title('Average Track Instrumentalness over Time', fontsize=14) plt.legend() plt.xlim([1958,2016]) plt.tight_layout() plt.show() # + fig = plt.figure(figsize=(10, 15)) axone=plt.subplot(6,1,1) axone.spines["top"].set_visible(False) axone.spines["right"].set_visible(False) axone.set_facecolor('white') axone.get_xaxis().tick_bottom() axone.get_yaxis().tick_left() plt.plot(audio_means_bottom.year, audio_means_bottom.liveness, color="black", linewidth=1,label="Bottom Ninety") plt.plot(audio_means_ten.year, audio_means_ten.liveness, color="blue", linewidth=1,label="Top Ten") plt.xlabel('Date', fontsize=12) plt.ylabel('Average Liveness', fontsize=12) plt.title('Average Track Liveness over Time', fontsize=14) plt.legend() plt.xlim([1958,2016]) axtwo=plt.subplot(6,1,2) axtwo.spines["top"].set_visible(False) axtwo.spines["right"].set_visible(False) axtwo.set_facecolor('white') axtwo.get_xaxis().tick_bottom() axtwo.get_yaxis().tick_left() plt.plot(audio_means_bottom.year,audio_means_bottom.loudness, color="black", linewidth=1,label="Bottom Ninety") plt.plot(audio_means_ten.year,audio_means_ten.loudness, color="green", linewidth=1,label="Top Ten") plt.xlabel('Date', fontsize=12) plt.ylabel('Average Loudness (dB)', fontsize=12) plt.title('Average Track Loudness (dB) over Time', fontsize=14) plt.legend() plt.xlim([1958,2016]) axthree=plt.subplot(6,1,3) axthree.spines["top"].set_visible(False) axthree.spines["right"].set_visible(False) axthree.set_facecolor('white') axthree.get_xaxis().tick_bottom() axthree.get_yaxis().tick_left() plt.plot(audio_means_bottom.year, audio_means_bottom.speechiness, color="black", linewidth=1,label="Bottom Ninety") plt.plot(audio_means_ten.year, audio_means_ten.speechiness, color="red", linewidth=1,label="Top Ten") plt.xlabel('Date', fontsize=12) plt.ylabel('Average Speechiness', fontsize=12) plt.title('Average Track Speechiness over Time', fontsize=14) plt.legend() plt.xlim([1958,2016]) axfour=plt.subplot(6,1,4) axfour.spines["top"].set_visible(False) axfour.spines["right"].set_visible(False) axfour.set_facecolor('white') axfour.get_xaxis().tick_bottom() axfour.get_yaxis().tick_left() plt.plot(audio_means_bottom.year, audio_means_bottom.tempo, color="black", linewidth=1,label="Bottom Ninety") plt.plot(audio_means_ten.year, audio_means_ten.tempo, color="purple", linewidth=1,label="Top Ten") plt.xlabel('Date', fontsize=12) plt.ylabel('Average Tempo (BPM)', fontsize=12) plt.title('Average Track Tempo (BPM) over Time', fontsize=14) plt.legend() plt.xlim([1958,2016]) axfive = plt.subplot(6,1,5) axfive.spines["top"].set_visible(False) axfive.spines["right"].set_visible(False) axfive.set_facecolor('white') axfive.get_xaxis().tick_bottom() axfive.get_yaxis().tick_left() plt.plot(audio_means_bottom.year,audio_means_bottom.valence, color="black", linewidth=1,label="Bottom Ninety") plt.plot(audio_means_ten.year,audio_means_ten.valence, color="brown", linewidth=1,label="Top Ten") plt.xlabel('Date', fontsize=12) plt.ylabel('Average Valence', fontsize=12) plt.title('Average Track Valence over Time', fontsize=14) plt.legend() plt.xlim([1958,2016]) axsix = plt.subplot(6,1,6) axsix.spines["top"].set_visible(False) axsix.spines["right"].set_visible(False) axsix.set_facecolor('white') axsix.get_xaxis().tick_bottom() axsix.get_yaxis().tick_left() plt.plot(audio_means_bottom.year,audio_means_bottom.lyrical_diversity, color="black", linewidth=1,label="Bottom Ninety") 
plt.plot(audio_means_ten.year,audio_means_ten.lyrical_diversity, color="darkblue", linewidth=1,label="Top Ten") plt.xlabel('Date', fontsize=12) plt.ylabel('Average Lyrical Diversity', fontsize=12) plt.title('Average Track Lyrical Diversity over Time', fontsize=14) plt.legend() plt.xlim([1958,2016]) plt.tight_layout() plt.show() # - # ** Interesting observations: ** After 2000 the danceability of tracks in the top ten have been much higher than the bottom ninety. On the other hand the liveness of tracks in the top ten have almost always been less on average than the bottom ninety. The speechiness of top ten tracks was much less than the bottom ninety in the 90's, but this switched in 2001 and for the duration of the 2000's it was much higher. A similar trend is found with valence. Top ten songs had a lower valence from the mid 80's until the late 90's (which could be somewhat related to the popularity of grunge?). Then in the 2000's the top ten tracks began to have higher valence than the bottom ninety. # # ** Note: ** Lyrical diversity may be a very good indicator of whether a song becomes a top ten hit since top ten hits always have more repetition and after the late 90's the gap between the lyrical diversity of top ten tracks and bottom ninety has grown wider. # We now separate the songs into two categories and use a binary classifier in the 'top_ten_hit' column. We keep only the songs for which we have Spotify audio metadata. We also drop the cases for which we have lyrics data, but not Spotify data. # Separate the data into songs in the top ten and songs in the bottom ninety. top_ten_weeks=all_data_sub[all_data_sub['rank']<=10] bottom_ninety_weeks=all_data_sub[all_data_sub['rank']>10] top_ten_group=top_ten_weeks.groupby(no_year_cols).size().reset_index() bottom_ninety_group=bottom_ninety_weeks.groupby(no_year_cols).size().reset_index() # Rename column top_ten_group.rename(columns={0:'num_weeks_top_ten'}, inplace=True) bottom_ninety_group.rename(columns={0:'num_weeks_bottom'}, inplace=True) # Join the top ten and bottom ninety together. ten_ninety_join=pd.merge(top_ten_group,bottom_ninety_group, how="outer",left_on=no_year_cols,right_on=no_year_cols) # Replace NaN's in the num_weeks_top_ten column with zeros. ten_ninety_join=ten_ninety_join.fillna(0) # Create a dataframe of all the unique songs with the year column. songs_with_year=all_data_sub.groupby(group_columns).size().reset_index() # Remove the duplicate tracks for songs that appear in the charts for two separate year by choosing the first year. top_ten_yearly=songs_with_year.groupby(no_year_cols).min().reset_index() # Merge the ten_ninety_join and top_ten_yearly datasets together. hits_with_year=pd.merge(ten_ninety_join,top_ten_yearly, how="outer",left_on=no_year_cols,right_on=no_year_cols) # Create the top_ten_hit predictor column. top_ten_prediction=hits_with_year[['peakPos', 'acousticness','danceability','duration_ms','energy','instrumentalness','key','liveness','loudness','mode','speechiness','tempo','time_signature','valence','year','num_weeks_top_ten',"num_weeks_bottom","lyrical_diversity"]] top_ten_prediction['top_ten_hit']= (top_ten_prediction.num_weeks_top_ten >0).astype(int) pd.options.display.float_format = '{:20,.5f}'.format top_ten_prediction.groupby(['top_ten_hit']).mean().reset_index() # From this initial investigation it seems like danceability, tempo and lyrical diversity may be important features to distinguish between cases. 
# Also, from our previous observations of the averages over time, we know that for certain years the averages between top ten hits and songs in the bottom ninety can differ significantly.

# ## Data Visualization

# The key variable is an integer between 0 and 11 and the time signature is an integer between 2 and 5. These are not on a numerical scale, so we introduce dummy indicator variables to distinguish between the cases when using regression.

# We can drop the 'peakPos' column when visualizing the data.
top_ten_prediction.drop(['peakPos'],axis=1,inplace=True)

# Change key and time entries to integers
top_ten_prediction.key=top_ten_prediction.key.astype(int)
top_ten_prediction.time_signature=top_ten_prediction.time_signature.astype(int)

# create dummy variables using get_dummies, then exclude the first dummy column
key_dummies = pd.get_dummies(top_ten_prediction.key, prefix='key').iloc[:, 1:]
time_dummies = pd.get_dummies(top_ten_prediction.time_signature, prefix='time').iloc[:, 1:]

# concatenate the dummy variable columns onto the original DataFrame (axis=0 means rows, axis=1 means columns)
top_ten_no_na_dum = pd.concat([top_ten_prediction, key_dummies,time_dummies], axis=1)

# ## Is there any correlation between the audio metadata features?

audio_data=top_ten_prediction[["acousticness", "danceability", "duration_ms", "energy" ,"instrumentalness", "liveness", "loudness", "speechiness", "tempo","valence","lyrical_diversity"]].dropna()

scatter_matrix(audio_data,alpha=0.2, figsize=(12, 12), diagonal='kde')
plt.show()

# ** Things of note: **
# We see that instrumentalness and speechiness appear to have more of a binary relationship, and there exists a clear positive relationship between loudness and energy. On the other hand, the relationship between danceability and energy does not appear to be as strong as we expected.

# We investigate the relationship between danceability, energy, loudness and tempo
print("Loudness and Energy:")
print(stats.linregress(audio_data.loudness,audio_data.energy))
print("")
print("Tempo and Energy:")
print(stats.linregress(audio_data.tempo,audio_data.energy))
print("")
print("Danceability and Energy:")
print(stats.linregress(audio_data.danceability,audio_data.energy))
print("")
print("Tempo and Danceability:")
print(stats.linregress(audio_data.tempo,audio_data.danceability))

# ** For loudness and energy we get an r^2 value of 0.7083, which indicates that there is indeed a strong positive relationship. For the others the correlation is not very strong. **

# We perform box-and-whisker plots of the features
top_ten_no_na_dum.plot(kind='box', subplots=True,layout=(6,6), figsize=(12, 12), sharex=False, sharey=False)
plt.show()

# From these observations we see there are a few outliers for the duration and speechiness variables. We now investigate for other outliers.

# ## Outlier Detection
#
# From the distributions above we see that 'lyrical_diversity', 'danceability', 'duration_ms', 'energy', 'loudness', and 'tempo' have distributions that appear somewhat normal, so we look for outliers in these features by checking for songs with observations that are more than two standard deviations away from the means.
top_ten_outliers=top_ten_prediction[['danceability','duration_ms','energy','loudness','tempo', 'lyrical_diversity']] outliers=pd.DataFrame() for column in top_ten_outliers.columns: mean = np.mean(top_ten_outliers[column], axis=0) sd = np.std(top_ten_outliers[column], axis=0) column_outliers_small=top_ten_outliers[top_ten_outliers[column]< mean-2*sd] column_outliers_large=top_ten_outliers[top_ten_outliers[column]> mean+2*sd] outliers=pd.concat([outliers,column_outliers_small],axis=0) outliers=pd.concat([outliers,column_outliers_large],axis=0) print("Number of outliers: ", len(outliers)) print("Percentage of total dataset: ", len(outliers)/len(top_ten_outliers)*100) # We obtain 4374 outliers. Since these make up a rather large percentage of the overall size of the dataset, we do not remove these songs in our analysis. corr = top_ten_no_na_dum.corr() sns.heatmap(corr, xticklabels=corr.columns.values, yticklabels=corr.columns.values) plt.show() # From this we see a high positive correlation between energy and loudness and high negative correlation between acousticness and both loudness and energy. # # We drop time_signature and key since these are explained by the dummy variables. top_ten_no_na_dum.drop(['key','time_signature'],axis=1,inplace=True) linear_feature_cols = ['acousticness','danceability','duration_ms','energy','instrumentalness','liveness','loudness','mode','speechiness','tempo','valence','key_1','key_2','key_3','key_4','key_5','key_7','key_8','key_9','key_10','key_11','time_3','time_4','time_5','year','lyrical_diversity'] # ## VIF Variance Inflation Factor # # The Correlation matrix above is used to detect colinearity between two variables. However, multicolinearity is a measurement of correlation from three or more variables and can emerge even when isolated pairs of variables are not colinear. # # We use the Variance Inflation Factor (VIF) as a measure of colinearity among predictor variables within a multiple regression. It is given by the formula: $ VIF = 1 / (1 - R^2). $ # + from patsy import dmatrices import statsmodels.api as sm from statsmodels.stats.outliers_influence import variance_inflation_factor feature_cols_str = "+".join(linear_feature_cols) # get y and X dataframes based on this regression: y_vif, X_vif = dmatrices('top_ten_hit ~' + feature_cols_str, top_ten_no_na_dum, return_type='dataframe') # For each X, calculate VIF and save in dataframe vif = pd.DataFrame() vif["VIF Factor"] = [variance_inflation_factor(X_vif.values, i) for i in range(X_vif.shape[1])] vif["features"] = X_vif.columns vif # - # We see that there is some degree of multi-colinearity between time_3 and time_4. Over 90% of the songs were in 4/4 time so it will not be a very good feature to use for prediction so we remove it from our dataset. top_ten_no_na_dum.drop(['time_4'],axis=1,inplace=True) # # 9. Machine Learning Modeling # Machine Learning Imports from sklearn.model_selection import train_test_split from sklearn.grid_search import GridSearchCV from sklearn.metrics import classification_report from sklearn.metrics import confusion_matrix from sklearn.metrics import accuracy_score from sklearn.metrics import roc_auc_score from sklearn.metrics import roc_curve from sklearn.metrics import auc from sklearn.metrics import r2_score sum(top_ten_prediction.top_ten_hit)/len(top_ten_prediction)*100 # From this we see that 20.484% of the songs in our dataset are top ten hits. Thus, when modeling we can obtain a 79.516% accuracy score just by always predicting that the song is not a top ten hit. 
#
# We have that 'top_ten_hit' is the target variable in our classification problem. To account for the imbalance between positive and negative samples, we pass class weights to the classifiers that support them, and the cross-validation inside GridSearchCV uses stratified folds, so relative class frequencies are approximately preserved in each train and validation fold.

# ## Classification Algorithm

# We write the following classification function that we can use with multiple machine learning models given a dataset, target variable and hyperparameters. A variant of the function that instead oversamples the minority class with SMOTE (Synthetic Minority Over-sampling Technique) is kept commented out below for reference.

def do_classify(clf, parameters, indf, featurenames, targetname, target1val, standardize=False,seed = 7):
    #To create the 'X' dataset we first standardize by the mean and standard deviation if standardize is set to True
    subdf=indf[featurenames]
    if standardize:
        subdfstd=(subdf - subdf.mean())/subdf.std()
    else:
        subdfstd=subdf

    #Create the 'X' and 'y' datasets
    X=subdfstd.values
    y=(indf[targetname].values==target1val)*1

    # Split X and y into training and test data.
    Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, test_size = 0.2, random_state=seed)

    clf = clf()

    # Now implement GridSearchCV for the cross validation step
    grid_clf = GridSearchCV(clf, parameters, cv = 5, scoring = 'roc_auc')

    # Use the classifier to fit the training data
    grid_clf.fit(Xtrain, ytrain)
    grid_est=grid_clf.best_estimator_
    print("BEST PARAMS", grid_clf.best_params_)

    clf_model = grid_est.fit(Xtrain, ytrain)
    predicted=clf_model.predict_proba(Xtest)[:,1]
    fpr, tpr, threshhold = roc_curve(ytest, predicted)

    roc_auc = auc(fpr, tpr)
    plt.title('ROC Curve')
    plt.plot(fpr, tpr, 'b', label = 'AUC = %0.2f' % roc_auc)
    plt.legend(loc = 'lower right')
    plt.plot([0, 1], [0, 1],'r--')
    plt.xlim([0, 1])
    plt.ylim([0, 1])
    plt.ylabel('True Positive Rate')
    plt.xlabel('False Positive Rate')
    plt.show()

    return clf_model

# We use an 80-20 train-test split by default and do not standardize the audio metadata features
# We set the seed to 7 to obtain consistent results

#def do_classify(clf, parameters, indf, featurenames, targetname, target1val, standardize=False,seed = 7):
# #To create the 'X' dataset we first standardize by the mean and standard deviation if standardize is set to True
# subdf=indf[featurenames]
# if standardize:
# subdfstd=(subdf - subdf.mean())/subdf.std()
# else:
# subdfstd=subdf
#Create the 'X' and 'y' datasets
# X=subdfstd.values
# y=(indf[targetname].values==target1val)*1
# Split X and y into training and test data.
# Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, test_size = 0.2, random_state=seed) # clf = clf() # Now implement GridSearchCV for the cross validation step # grid_clf = GridSearchCV(clf, parameters, cv = 5, scoring = 'roc_auc') # Use the classifier to fit the training data # grid_clf.fit(Xtrain, ytrain) # grid_est=grid_clf.best_estimator_ # print("BEST PARAMS", grid_clf.best_params_) # Use oversampling with SMOTE to account for imbalanced data # smote = SMOTE(random_state=seed) # smote_model = make_pipeline(smote, grid_est) # smote_model = smote_model.fit(Xtrain, ytrain) # preds = smote_model.predict_proba(Xtest)[:,1] # fpr, tpr, threshhold = roc_curve(ytest, preds) # training_accuracy = smote_model.score(Xtrain, ytrain) # test_accuracy = smote_model.score(Xtest, ytest) # print("Accuracy on training data: {:0.5f}".format(training_accuracy)) # print("Accuracy on test data: {:0.5f}".format(test_accuracy)) #test_prediction=smote_model.predict(Xtest) # test_prediction=grid_est.predict(Xtest) # print(confusion_matrix(ytest,test_prediction)) # print(classification_report(ytest,test_prediction)) # roc_auc = auc(fpr, tpr) # plt.title('ROC Curve of SMOTE') # plt.plot(fpr, tpr, 'b', label = 'AUC = %0.2f' % roc_auc) # plt.legend(loc = 'lower right') # plt.plot([0, 1], [0, 1],'r--') # plt.xlim([0, 1]) # plt.ylim([0, 1]) # plt.ylabel('True Positive Rate') # plt.xlabel('False Positive Rate') # plt.show() # return smote_model,grid_est # Since our question is primarily a binary classification problem, we begin with Logistic Regression. # ## Logistic Regression linear_feature_cols = ['acousticness','danceability','duration_ms','energy','instrumentalness','liveness','loudness','mode','speechiness','tempo','valence','key_1','key_2','key_3','key_4','key_5','key_7','key_8','key_9','key_10','key_11','time_3','time_5','year','lyrical_diversity'] from sklearn.linear_model import LogisticRegression params = {"C": [0.001, 0.1, 1, 10, 100],'class_weight':[{0:0.79516, 1:0.20484}]} clf_log= do_classify(LogisticRegression, params, top_ten_no_na_dum, linear_feature_cols, 'top_ten_hit',1) pd.options.display.float_format = '{:20,.5f}'.format pd.DataFrame({"feature":linear_feature_cols,"log_coefficient":clf_log.coef_[0]}) # From the ROC curve and coefficients we see that Logistic Regression does not produce a model that is much more accurate than the trivial model. Thus the features do not have much influence on the model. # ## Ridge Regression, Lasso Regression and Elastic Net Regression # Since we have a classification problem, and these models are linear in nature we do not implement them here. # ## Decision Tree # In a Decision Tree we have a number of parameters we can tune. # - Criterion measures the quality of the split by either entropy or the Gini impurity. Entropy chooses splits in the tree that result in the purest daughter nodes, whereas Gini impurity is a measure of how often a randomly chosen element from the set would be incorrectly labeled if it was randomly labeled according to the distribution of labels in the subset. # - Max depth is the maximum depth of the tree or the max number of branches a node can be from the root. # - Max features is the maximum number of variables we may use in the tree. # - min_samples_leaf is the minimum of observations required to be at a leaf node. # - min_samples_split is the minimum number of samples required to split an internal node. 
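# To make the criterion options described above concrete, here is a minimal hand calculation (with made-up toy class counts, not the chart data) of the Gini impurity and entropy for a single node:

# +
import numpy as np

# hypothetical daughter node: 70 non-hits and 30 top-ten hits
toy_counts = np.array([70, 30])
p = toy_counts / toy_counts.sum()

toy_gini = 1 - np.sum(p ** 2)          # Gini impurity: 1 - sum(p_i^2)
toy_entropy = -np.sum(p * np.log2(p))  # entropy in bits: -sum(p_i * log2(p_i))

print("Gini impurity:", round(toy_gini, 3))  # 0.42
print("Entropy:", round(toy_entropy, 3))     # ~0.881
# -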
from sklearn.tree import DecisionTreeClassifier from sklearn.tree import export_graphviz feature_cols = ['acousticness','danceability','duration_ms','energy','instrumentalness','key','liveness','loudness','mode','speechiness','tempo','time_signature','valence','year', 'lyrical_diversity'] params ={'criterion': ["gini","entropy"], 'max_depth': [3, 5,7], 'max_features': [3, 5,10], 'min_samples_leaf': [ 3,5,10], 'min_samples_split': [3,5,10], 'class_weight':[{0:0.79516, 1:0.20484}]} clf_dtree = do_classify(DecisionTreeClassifier,params, top_ten_prediction, feature_cols, 'top_ten_hit',1) from IPython.display import Image import pydotplus dot_data = export_graphviz(clf_dtree, out_file=None, feature_names=feature_cols, class_names='top_ten_hit', filled=True, rounded=True, special_characters=True) graph = pydotplus.graph_from_dot_data(dot_data) Image(graph.create_png()) # ** Here the Decision Tree model improves on the area under the curve to a value of 0.59 compared to the Logistic Regression model AUC of 0.55. We investigate further with a random forest model. ** # ## Random Forest # A random forest is an ensemble method that fits a number of decision tree classifiers on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and control for over-fitting. from sklearn.ensemble import RandomForestClassifier params={'n_estimators':[500,1000],'max_depth':[5],'class_weight':[{0:0.79516, 1:0.20484}]} clf_rf= do_classify(RandomForestClassifier, params, top_ten_prediction, feature_cols, 'top_ten_hit',1) pd.DataFrame({"feature":feature_cols,"feature_importance":clf_rf.feature_importances_}) # The Random Forest Classification model results in an improved AUC of 0.62. The most important feature is lyrical diversity, followed by year and danceability. From this we see that lyrics can be a more important predictor of song popularity than we may have anticipated. # ## Support Vector Machine Classification # # Support vector machines are a set of supervised learning methods used for classification, regression and outliers detection. Given a set of training examples, each marked as belonging to one or the other of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other, making it a non-probabilistic binary linear classifier. # # For the parameters: # - C is the penalty parameter of the error term. # - degree is the degree of the polynomial kernel function. # - gamma is the kernel coefficient and when it is ‘auto’ then 1/n_features will be used. # - cache_size specifies the size of the kernel cache (in MB). # - max_iter when set to -1 mean there is no limit on the number of iterations within the solver. from sklearn.svm import SVC params ={"C":[1],'cache_size':[200], 'probability':[True], 'class_weight':[{0:0.79516, 1:0.20484}]} clf_svc=do_classify(SVC, params, top_ten_prediction, feature_cols, 'top_ten_hit',1) # The Support Vector Machine Classifier only gives a AUC of 0.51 and one of the downfalls of SVC is that it does not tell us much about which features are most important in the model. # ## KNN # The k-Nearest Neighbor (KNN) algorithm is used for classification or regression. During the training phase the KNN algorithm finds the $k$ “nearest” points to a given point, and returns the class with the highest proportion. If $k = 1$, then one only looks for the closest point and returns its class. The optimal value for $k$ in KNN is usually between 3-10. In our case $k$ is given by the parameter 'n_neighbors'. 
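# As a toy illustration of the majority vote just described (synthetic points, not the song data): the three nearest neighbours of the query point all belong to class 1, so KNN returns class 1.

# +
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# six synthetic 2-D points with binary labels, used purely for illustration
toy_X = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]])
toy_y = np.array([0, 0, 0, 1, 1, 1])

toy_knn = KNeighborsClassifier(n_neighbors=3).fit(toy_X, toy_y)

print(toy_knn.predict([[4.5, 5.0]]))        # [1]  -> all 3 nearest neighbours are class 1
print(toy_knn.predict_proba([[4.5, 5.0]]))  # [[0. 1.]]
# -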
from sklearn.neighbors import KNeighborsClassifier params ={'n_neighbors':[3,5,10]} clf_knn = do_classify(KNeighborsClassifier, params, top_ten_prediction, feature_cols, 'top_ten_hit',1) # K-Nearest Neighbors gives a slightly better model than guessing, but its area under the curve is only 0.52 so it does not offer much improvement. # ## Neural Network # # For a supervised nueral network model we use Multi-layer Perceptron. It is a supervised learning algorithm that learns a function $f(\cdot): R^m \rightarrow R^o$ by training on a dataset, where m is the number of dimensions for input and $o$ is the number of dimensions for output. Given a set of features $X = {x_1, x_2, ..., x_m}$ and a target $y$, it can learn a non-linear function approximator for either classification or regression. It is different from logistic regression, in that between the input and the output layer, there can be one or more non-linear layers, called hidden layers. # # Multi-layer Perceptron is sensitive to feature scaling, so we will scale the data. # # For the neural network we will only define the hidden_layer_sizes. For this parameter you pass in a tuple consisting of the number of neurons you want at each layer, where the nth entry in the tuple represents the number of neurons in the nth layer of the MLP model. There are many ways to choose these numbers, but for simplicity we will choose 3 layers with the same number of neurons as there are features in our dataset. feature_cols = ['acousticness','danceability','duration_ms','energy','instrumentalness','key','liveness','loudness','mode','speechiness','tempo','time_signature','valence','year', 'lyrical_diversity'] from sklearn.preprocessing import StandardScaler from sklearn.neural_network import MLPClassifier X=top_ten_prediction[feature_cols].values y=top_ten_prediction['top_ten_hit'].values Xtrain_nn, Xtest_nn, ytrain_nn, ytest_nn = train_test_split(X, y, test_size = 0.2, random_state=6) # For the neural network we add the extra scaling step: scaler = StandardScaler() # Fit only to the training data scaler.fit(Xtrain_nn) X_train_nn = scaler.transform(Xtrain_nn) X_test_nn = scaler.transform(Xtest_nn) #Use the classifier to fit the training data. We have 14 variables in our set mlp = MLPClassifier() parameters={'hidden_layer_sizes':[(10,10,10),(15,15,15),(20,20,20)]} #mlp.fit(X_train_nn,ytrain_nn) grid_clf = GridSearchCV(mlp, parameters, cv = 5, scoring = 'roc_auc') # Use the classifier to fit the training data grid_clf.fit(Xtrain_nn, ytrain_nn) grid_est=grid_clf.best_estimator_ print("BEST PARAMS", grid_clf.best_params_) mlp_model = grid_est.fit(Xtrain_nn, ytrain_nn) predicted=mlp_model.predict_proba(Xtest_nn)[:,1] fpr, tpr, threshhold = roc_curve(ytest_nn, predicted) roc_auc = auc(fpr, tpr) plt.title('ROC Curve') plt.plot(fpr, tpr, 'b', label = 'AUC = %0.2f' % roc_auc) plt.legend(loc = 'lower right') plt.plot([0, 1], [0, 1],'r--') plt.xlim([0, 1]) plt.ylim([0, 1]) plt.ylabel('True Positive Rate') plt.xlabel('False Positive Rate') plt.show() # Neural Network does not give a very accurate model with an essentially trivial AUC of 0.5. # ## GBM Tree # # Gradient Boosting is a sequential technique which works on the principle of ensemble. It combines a set of weak learners and delivers improved prediction accuracy. # # - learning rate shrinks the contribution of each tree by learning_rate. There is a trade-off between learning_rate and n_estimators. We want to choose a relatively high learning rate. 
# - min_samples_split is the minimum number of samples required to split an internal node and should be around ~0.5-1% of total values. Since this is an imbalanced class problem, we'll take a small value from the range.
# - min_samples_leaf is the minimum number of samples required to be at a leaf node and is used to prevent overfitting. We choose a small value again because of the imbalanced classes.
# - max_depth limits the number of nodes in the tree.
# - max_features is the number of features to consider when looking for the best split. A general rule of thumb is to start with the square root.

from sklearn.ensemble import GradientBoostingClassifier
params ={'learning_rate':[0.05,0.1,0.2], 'min_samples_split':[150],'min_samples_leaf':[50], 'max_depth':[3,5],'max_features':['sqrt',10]}
clf_gbm = do_classify(GradientBoostingClassifier, params, top_ten_prediction, feature_cols, 'top_ten_hit',1)

pd.DataFrame({"feature":feature_cols,"feature_importance":clf_gbm.feature_importances_})

# GBM Tree gives an AUC of 0.64, which is the largest of any model. Similar to the Random Forest Classifier, lyrical diversity and year are the most important features, whereas in this case duration is the third most significant feature.

# # Conclusion
#
# From our investigation into key popularity we have found that the most popular keys have changed over time. C♯ has now become one of the most popular keys despite not being a very convenient key for piano and guitar. Throughout the history of the Billboard Hot 100, major keys have been more popular than minor keys.
#
# On the other hand, lyrical diversity has steadily decreased on average over the years, and recently the most popular songs feature much more repetition than the other songs on the Billboard Hot 100. The difference in lyrical diversity between top ten songs and songs in the bottom ninety has led it to become one of the most important features in both the random forest and GBM Tree models.
#
# The best performing models were the Random Forest Classifier and GBM classifier, with area under the curve scores of 0.62 and 0.64 respectively. While these scores are not very high, the models still perform better than the trivial prediction.
#
# In the future it would be nice to incorporate the Million Song Dataset, which contains a measurement of artist "hotness" as well as genre information that could help to improve our model (a rough sketch of such a merge follows below).
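# A rough, hypothetical sketch of that future work: one way externally sourced genre and artist-hotness columns could be joined onto the per-song table before refitting the models. The file name and column names here are placeholders for illustration only, not the Million Song Dataset's actual schema.

# +
import pandas as pd

# hypothetical external feature table keyed on artist name (placeholder file and columns)
extra_features = pd.read_csv('million_song_features.csv')  # assumed columns: artist, artist_hotness, genre

# left-join the extra columns onto the per-song table and one-hot encode the genre label
augmented = hits_with_year.merge(extra_features, how='left', on='artist')
augmented = pd.concat([augmented, pd.get_dummies(augmented['genre'], prefix='genre')], axis=1)

# the enlarged feature list could then be passed to do_classify() exactly as before
# -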
81,208
/Untitled12.ipynb
a1985941283519abf346f4debe5ca9e04282f290
[]
no_license
umith/ANN-Model-accuracy-and-loss-graph
https://github.com/umith/ANN-Model-accuracy-and-loss-graph
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
1,053
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/trong-shen/Game-of-Throne-Project/blob/master/GOT_predictive_model_part_2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="Kvbsdq0-guNm" colab_type="text" # Load the CSV file from Github # + id="Xt3EaJ_AgV2I" colab_type="code" outputId="8f1d9f43-f5bf-43eb-b1ea-61bc74e02606" colab={"base_uri": "https://localhost:8080/", "height": 1000} import pandas as pd import numpy as np import string # !pip install nltk import nltk nltk.download('all') GOT= pd.read_csv('https://raw.githubusercontent.com/trong-shen/Game-of-Throne-Project/master/Game_of_Thrones_Script_clean.csv') char_info=pd.read_csv('https://raw.githubusercontent.com/trong-shen/Game-of-Throne-Project/master/got_table.csv') print(len(GOT)) #Extract only the house data from char_info House=char_info[['name','house']] #Created a function to apply globally to the data frame def return_house(name): house_dict=dict(zip(House.name,House.house)) try: house=house_dict[name] return(house) except KeyError: return(float("Nan")) # Apply the house dict function to the whole GOT dataframe GOT['House']=GOT['Name'].apply(lambda x:return_house(x)) # + id="oT79c3JB_d23" colab_type="code" colab={} #list of contractions and the expanded mapping used for cleaning the data CONTRACTION_MAP = { "ain't": "is not", "aren't": "are not", "can't": "cannot", "can't've": "cannot have", "'cause": "because", "could've": "could have", "couldn't": "could not", "couldn't've": "could not have", "didn't": "did not", "doesn't": "does not", "don't": "do not", "hadn't": "had not", "hadn't've": "had not have", "hasn't": "has not", "haven't": "have not", "he'd": "he would", "he'd've": "he would have", "he'll": "he will", "he'll've": "he he will have", "he's": "he is", "how'd": "how did", "how'd'y": "how do you", "how'll": "how will", "how's": "how is", "I'd": "I would", "I'd've": "I would have", "I'll": "I will", "I'll've": "I will have", "I'm": "I am", "I've": "I have", "i'd": "i would", "i'd've": "i would have", "i'll": "i will", "i'll've": "i will have", "i'm": "i am", "i've": "i have", "isn't": "is not", "it'd": "it would", "it'd've": "it would have", "it'll": "it will", "it'll've": "it will have", "it's": "it is", "let's": "let us", "l": "i", "ma'am": "madam", "mayn't": "may not", "might've": "might have", "mightn't": "might not", "mightn't've": "might not have", "must've": "must have", "mustn't": "must not", "mustn't've": "must not have", "needn't": "need not", "needn't've": "need not have", "o'clock": "of the clock", "oughtn't": "ought not", "oughtn't've": "ought not have", "shan't": "shall not", "sha'n't": "shall not", "shan't've": "shall not have", "she'd": "she would", "she'd've": "she would have", "she'll": "she will", "she'll've": "she will have", "she's": "she is", "should've": "should have", "shouldn't": "should not", "shouldn't've": "should not have", "so've": "so have", "so's": "so as", "that'd": "that would", "that'd've": "that would have", "that's": "that is", "there'd": "there would", "there'd've": "there would have", "there's": "there is", "they'd": "they would", "they'd've": "they would have", "they'll": "they will", "they'll've": "they will have", "they're": "they are", 
"they've": "they have", "to've": "to have", "wasn't": "was not", "we'd": "we would", "we'd've": "we would have", "we'll": "we will", "we'll've": "we will have", "we're": "we are", "we've": "we have", "weren't": "were not", "what'll": "what will", "what'll've": "what will have", "what're": "what are", "what's": "what is", "what've": "what have", "when's": "when is", "when've": "when have", "where'd": "where did", "where's": "where is", "where've": "where have", "who'll": "who will", "who'll've": "who will have", "who's": "who is", "who've": "who have", "why's": "why is", "why've": "why have", "will've": "will have", "won't": "will not", "won't've": "will not have", "would've": "would have", "wouldn't": "would not", "wouldn't've": "would not have", "y'all": "you all", "y'all'd": "you all would", "y'all'd've": "you all would have", "y'all're": "you all are", "y'all've": "you all have", "you'd": "you would", "you'd've": "you would have", "you'll": "you will", "you'll've": "you will have", "you're": "you are", "you've": "you have" } # + id="cle0cONgAULs" colab_type="code" colab={} #define a function, expand_contractions, which takes a string and expands all contractions within the string using contraction_map def expand_contractions(text, contraction_mapping=CONTRACTION_MAP): expanded = '' text = text.lower() #make all text lowercase wordList = text.split() #put text into a list of words for i in range(len(wordList)): if wordList[i] in contraction_mapping.keys(): #for each word, if it is a contraction in the listing expanded = expanded + ' ' + contraction_mapping[wordList[i]] #then replace with the expanded version else: expanded = expanded + ' ' + wordList[i] #otherwise, keep the original word return expanded #define a function, remove_punctuation, which takes in a string and removes all punctuation def remove_punctuation(s): s = s.translate(str.maketrans('','',string.punctuation)) #take out punctuation in the sentence j = nltk.word_tokenize(s.lower()) #put each word in the sentence within a list, j return s #define function, clean_sentences, which removes punctuation and expands all contractions in a sentence def clean_sentences(text): return remove_punctuation(expand_contractions(text)) # + id="YOXNk319AXLj" colab_type="code" colab={} #Expand contractions, remove punctuation all in one function clean_sentence GOT['Sentences_Clean'] = GOT.Sentence.apply(lambda x:clean_sentences(x)) # + id="FS53Yh89CAnb" colab_type="code" colab={} #Calculate words per line, assuming contractions are all expanded GOT["Num_Words"] = GOT.Sentences_Clean.apply(lambda x: len(x.split())) GOT.to_csv (r'GOT_house_csv.csv', index = False, header=True) # + [markdown] id="X5ln7QIcDK6q" colab_type="text" # # Now we will prepare data for our Machine Learning Predictive model by only looking at mostly season 1 data and a subsection of season 2 data # # + id="VdrjeoBTukgp" colab_type="code" outputId="a73fbfd9-d031-4673-b85d-077e51d9be3b" colab={"base_uri": "https://localhost:8080/", "height": 850} #Extract only season one data and two data GOT1=GOT[GOT.Season=="Season 1"] GOT2=GOT[GOT.Season=="Season 2"] print(GOT1.head()) print(GOT1.info()) print(GOT2.head()) print(GOT2.info()) # + id="TOo0mZPmOaqr" colab_type="code" outputId="276e8edc-b23c-42f6-cb81-193d0c35d489" colab={"base_uri": "https://localhost:8080/", "height": 1000} #Tokenize the words and remove between words punctunations tokenizer=nltk.RegexpTokenizer(r"\w+") GOT1['Tokenized_Sentence']=GOT1.Sentences_Clean.apply(lambda x:tokenizer.tokenize(x.lower())) 
GOT2['Tokenized_Sentence']=GOT2.Sentences_Clean.apply(lambda x:tokenizer.tokenize(x.lower())) GOT1.head(100) # + id="zyBF3RVqPFf1" colab_type="code" colab={} #Remove stopwords for sentiment analysis stopword=nltk.corpus.stopwords.words('english') # + id="0J8WRWnGeFvN" colab_type="code" colab={} #Define a function to remove stop words def remove_stopwords(tokenized_sentence): text=[word for word in tokenized_sentence if word not in stopword] return text # + id="BGBoLfGGefi_" colab_type="code" outputId="f175dca7-7dde-4b13-ad30-6733e4e57a8f" colab={"base_uri": "https://localhost:8080/", "height": 734} #Implement the remove stop words function to a new column GOT1['Tokenized_No_Stop']=GOT1.Tokenized_Sentence.apply(lambda x:remove_stopwords(x)) GOT2['Tokenized_No_Stop']=GOT2.Tokenized_Sentence.apply(lambda x:remove_stopwords(x)) GOT1.head() # + id="BnF0bKWXe63D" colab_type="code" outputId="42b6cda2-96e0-4ba8-8cf6-19a99722a4f0" colab={"base_uri": "https://localhost:8080/", "height": 734} # Stemming of the Non Stop Words Column wn=nltk.WordNetLemmatizer() def stem_reduction(tokenized_sentence): sentence=[wn.lemmatize(word) for word in tokenized_sentence] return (sentence) GOT1['Stemmed_Sentence']=GOT1['Tokenized_No_Stop'].apply(lambda x:stem_reduction(x)) GOT2['Stemmed_Sentence']=GOT2['Tokenized_No_Stop'].apply(lambda x:stem_reduction(x)) GOT1.tail() # + id="DzKLVQ4f9bRK" colab_type="code" outputId="723e8679-1f46-47e9-c072-4040c7f1c008" colab={"base_uri": "https://localhost:8080/", "height": 337} #We will be using the Vader Sentiment analysis tool for sentiment analysis. #The function however works only on a string of sentence instead of a list of words (tokenized sentence) #Join the Tokenized Stemmed_Sentence into a string of sentence for the VADER sentiment analysis def convert_sentence (tokenized_sentence): sentence='' for word in tokenized_sentence: sentence=sentence+" "+word return sentence GOT1['Stemmed_Sentence_Non_Token']=GOT1['Stemmed_Sentence'].apply(lambda x:convert_sentence(x)) GOT2['Stemmed_Sentence_Non_Token']=GOT2['Stemmed_Sentence'].apply(lambda x:convert_sentence(x)) GOT1['Stemmed_Sentence_Non_Token'].head() # + id="ek4IbNjs6ob5" colab_type="code" outputId="a9fbd338-70c3-4745-e614-4c75155ef666" colab={"base_uri": "https://localhost:8080/", "height": 1000} #Use Vader Sentiment analysis tool #Import a define a function for outputing the whole score output and only the compound score from nltk.sentiment.vader import SentimentIntensityAnalyzer sid=SentimentIntensityAnalyzer() GOT1['Sentiment']=GOT1['Stemmed_Sentence_Non_Token'].apply(lambda x:sid.polarity_scores(x)) def compound_score(Sentiment): return(Sentiment['compound']) GOT1['Sentiment_Compound_Score']=GOT1['Sentiment'].apply(lambda x:compound_score(x)) GOT1.head(100) # + id="yijZWz2Agxx-" colab_type="code" outputId="068b1a42-ae37-449c-8396-c1aa8bf187fe" colab={"base_uri": "https://localhost:8080/", "height": 617} #Set the character and house to category GOT1.info() GOT1['Name'].astype('category') GOT1['House'].astype('category') # + id="aI8nrePfiUr7" colab_type="code" outputId="fd888d65-eed9-4c7d-d46c-b377e1befe32" colab={"base_uri": "https://localhost:8080/", "height": 827} #A Pivot Table to Show mean Compiund_Score by House table=pd.pivot_table(GOT1,values='Sentiment_Compound_Score',index=['House'],aggfunc=['mean','count']) table # + [markdown] id="52L_ZjTWlGkO" colab_type="text" # ##Compound score ranges from -1 to 1 with -1 being most negative sentiment, 0 being most neutral, and 1 being most positive. 
# As shown, most houses' lines are very close to neutral, except for some houses which are further away; however, those houses also have a lower number of lines. In general, characters from all the different houses speak relatively neutral sentences in terms of sentiment.
#
# ###The counts in the table also equate to the number of lines spoken by each house, and for simplicity's sake, we will only work with the top 4 houses for future analysis. That is **House Stark, Mormont, Lannister, and Targaryen**.
#

# + id="CQekypR8oox3" colab_type="code" outputId="12a5da9f-b91b-4b5c-8ef4-6c7c39eaadf2" colab={"base_uri": "https://localhost:8080/", "height": 1000}
import seaborn as sns
import matplotlib.pyplot as plt
from scipy import stats

#Create a data frame with just the 4 major houses
#For ease of processing later, we will redefine a dataframe called GOT1TopHouses with only data of interest for later ML processing
GOT1TopHouses=GOT1[GOT1['House'].isin(['House Lannister', 'House Stark','House Targaryen', 'House Mormont'])]
print(GOT1TopHouses.info())

#Plotting distributions
House_Lannister=GOT1TopHouses[GOT1TopHouses.House=='House Lannister']
House_Targaryen=GOT1TopHouses[GOT1TopHouses.House=='House Targaryen']
House_Stark=GOT1TopHouses[GOT1TopHouses.House=='House Stark']
House_Mormont=GOT1TopHouses[GOT1TopHouses.House=='House Mormont']

# Will only focus on the four main houses Lannister, Targaryen, Stark, and Mormont
sns.set({'figure.figsize':(11.7,8.27)})
sns.set_style({'figure.facecolor':'white'})
plt.xlabel('Sentiment Compound Score')
plt.ylabel('Probability Density')
sns.distplot(House_Lannister[['Sentiment_Compound_Score']], hist=False, kde=True, label='Lannister')
sns.distplot(House_Mormont[['Sentiment_Compound_Score']], hist=False, kde=True, label='Mormont')
sns.distplot(House_Targaryen[['Sentiment_Compound_Score']], hist=False,kde=True,label='Targaryen')
sns.distplot(House_Stark[['Sentiment_Compound_Score']], hist=False, kde=True,label='Stark')

# + [markdown] id="Y3XCAm2HZ0Z7" colab_type="text"
# ## As can be seen from the above, there doesn't seem to be a clear, distinct separation in the sentiment distributions, but there are some differences in standard deviation between them. We will analyze the standard deviation below.
#

# + id="cFaYifIectKL" colab_type="code" outputId="c3f5850c-0973-4292-f0e3-1b36bbc0f556" colab={"base_uri": "https://localhost:8080/", "height": 0}
#A Pivot Table to show the mean and standard deviation of Compound_Score by House, only for the four houses of interest
table=pd.pivot_table(GOT1TopHouses,values='Sentiment_Compound_Score',index=['House'],aggfunc=['mean','std','count'])
table
GOT1TopHouses.head(100)

# + [markdown] id="zALasHAYOwhj" colab_type="text"
# #Before we proceed with a machine learning model predicting which house a line belongs to, we need to extract more attributes other than the sentiment score.
#
# ##We want attributes based on each house's favourite words.
# Thus we will first compile a list of favourite words for each house of interest (the four houses mentioned above).

# + id="U_VSHks4Ryjb" colab_type="code" outputId="6d952da3-1f37-4b18-ae56-76270733d46d" colab={"base_uri": "https://localhost:8080/", "height": 347}
# First we find the number of words spoken by each of the top four houses in Season 1
GOT1HouseData = pd.pivot_table(GOT1, values = 'Num_Words', index = 'House', aggfunc=np.sum)
GOT1HouseData = GOT1HouseData.sort_values('Num_Words')

#find Houses with the most words said
GOT1_TopHousesData = GOT1HouseData.tail(10)
GOT1_TopHousesData = pd.DataFrame(GOT1_TopHousesData.to_records()) #convert into a dataframe
GOT1_TopHousesData

# + id="83MDNTE3Qnrc" colab_type="code" colab={}
# These are the same functions used in the whole season data analysis portion of this project.
# Create a list of words based on house and create a dictionary with the key being the word spoken and the value being the frequency it was spoken

#define a function, house_words, that appends all tokenized_no_stop words for a chosen house in the form of a list
def house_words(house):
    l = []
    for index, row in GOT1TopHouses.iterrows():
        if house in row.House:
            l = l + row.Tokenized_No_Stop
    return l

#define a function, house_most_freq, that takes a house and outputs the listing of words spoken & number of times they were spoken
#only find the top 100 words
def house_most_freq(s):
    wordlist = house_words(s) #get the list of words said for the house
    word_dict = {} #create dictionary to track count of unique words
    for i in wordlist:
        if i in word_dict.keys():
            word_dict[i] = word_dict[i] + 1
        else:
            word_dict[i] = 1
    freq = sorted(word_dict.items(), key=lambda x: x[1], reverse=True) #sort by value
    word_list=[i1 for i1,i2 in freq]
    #output only the top 100 words
    return freq[0:100]

# + [markdown] id="R--SkXYT_2Ln" colab_type="text"
# ## Below shows the 5 most spoken words for each house and the number of times each was said in season one.
#

# + id="jyaf3ZOC8ree" colab_type="code" outputId="5afd0895-e6a0-4497-ec21-e57e821480cb" colab={"base_uri": "https://localhost:8080/", "height": 0}
house_most_freq('House Lannister')[0:5]

# + id="TI3RGw-0-W3i" colab_type="code" outputId="ca5f07e5-930a-4b6b-fd97-aff075ec6243" colab={"base_uri": "https://localhost:8080/", "height": 0}
house_most_freq('House Targaryen')[0:5]

# + id="LjbxPkwD-esZ" colab_type="code" outputId="b42f145d-1266-4669-9d02-e86ed19fedb4" colab={"base_uri": "https://localhost:8080/", "height": 0}
house_most_freq('House Stark')[0:5]

# + id="liTjhUU1-ney" colab_type="code" outputId="72e4391d-ad81-4246-a0ba252100e7" colab={"base_uri": "https://localhost:8080/", "height": 0}
house_most_freq('House Mormont')[0:5]

# + [markdown] id="FmAV1bGWAHer" colab_type="text"
# ### Other than some general words such as I and would, most houses have relatively distinct word choices:
# * ### Lannister has Lord, Stark, and King
# * ### Targaryen: Khaleesi, Dragon, Want, Know
# * ### Stark: Father, Lord, I, Know
# * ### Mormont: Khaleesi, Man, Good, Think
#

# + [markdown] id="LdMxrx_FBJYs" colab_type="text"
# ### Now we will create a word choice score for each house of interest (thus 4 attributes). Each line will have a score for each house calculated as follows:
# ### word_score = frequency_of_that_word / total_number_of_words_spoken_by_that_house
# ### house_word_prob = sum(word_score) / number_of_words_in_that_sentence
#

# + id="QqaTTQEO1JOX" colab_type="code" colab={}
#Define a function that calculates, for a sentence, the word score for a given house (i.e. how popular its words are with that house).
def house_word_score(house,sentence):
    word_list, freq_list=zip(*house_most_freq(house))
    house_num_words=int(GOT1_TopHousesData[GOT1_TopHousesData['House']==house].Num_Words)
    score_sum=0
    for word in sentence:
        try:
            word_score=freq_list[word_list.index(word)]/house_num_words
            score_sum=score_sum+word_score
        except:
            pass
    try:
        return score_sum/(len(sentence))
    except:
        return 0

# + id="ihIfaK7xIJKB" colab_type="code" colab={}
#Applying that to every line in the GOT1TopHouses dataframe to create 4 attributes for each house
"""
GOT1TopHouses['Lannister_Word_Score']=GOT1TopHouses.Stemmed_Sentence.apply(lambda x:house_word_score('House Lannister',x))
GOT1TopHouses['Stark_Word_Score']=GOT1TopHouses.Stemmed_Sentence.apply(lambda x:house_word_score('House Stark',x))
GOT1TopHouses['Targaryen_Word_Score']=GOT1TopHouses.Stemmed_Sentence.apply(lambda x:house_word_score('House Targaryen',x))
GOT1TopHouses['Mormont_Word_Score']=GOT1TopHouses.Stemmed_Sentence.apply(lambda x:house_word_score('House Mormont',x))
"""
#The above processing takes too long so we will just import an already processed csv file below
GOT1TopHouses_2=pd.read_csv('https://raw.githubusercontent.com/trong-shen/Game-of-Throne-Project/master/GOT1TopHouses.csv')

# + id="zP6qN6w7FkwU" colab_type="code" outputId="95060d37-a57c-4e69-9cc5-accf2c21eaf4" colab={"base_uri": "https://localhost:8080/", "height": 0}
GOT1TopHouses_2.head(10)

# + id="ljt9BGYRnc4A" colab_type="code" outputId="88a647cd-db78-495f-f2d3-4da4d485f4ec" colab={"base_uri": "https://localhost:8080/", "height": 0}
#Evaluate the mean of these word scores
table=pd.pivot_table(GOT1TopHouses_2,values=['Lannister_Word_Score','Stark_Word_Score','Targaryen_Word_Score','Mormont_Word_Score'],index=['House'],aggfunc=['mean'])
table

# + [markdown] id="4frHxJKlpHHC" colab_type="text"
# ###House Stark and House Targaryen have the highest mean Stark_Word_Score and Targaryen_Word_Score respectively, which is reasonable.
# ###For the other two houses, however, their own house's word score is only among their top two mean scores rather than the highest, which is not ideal.

# + [markdown] id="iw-5V2Tvot7G" colab_type="text"
# Check to see if the distributions of the house word scores are distinct for each house.

# + id="BWlDU0_9oysy" colab_type="code" outputId="e4d2eec5-1af7-46c6-e188-9b693700b6df" colab={"base_uri": "https://localhost:8080/", "height": 0}
#Plotting distributions and using KDE as an estimate of each distribution.
House_Lannister=GOT1TopHouses_2[GOT1TopHouses_2.House=='House Lannister']
House_Targaryen=GOT1TopHouses_2[GOT1TopHouses_2.House=='House Targaryen']
House_Stark=GOT1TopHouses_2[GOT1TopHouses_2.House=='House Stark']
House_Mormont=GOT1TopHouses_2[GOT1TopHouses_2.House=='House Mormont']

sns.set({'figure.figsize':(11.7,8.27)})
sns.set_style({'figure.facecolor':'white'})
plt.figure(0)
plt.xlabel('Lannister_Word_Score')
plt.ylabel('Probability Density')
plt.title(' Lannister_Word_Score')
sns.distplot(House_Lannister[['Lannister_Word_Score']], hist=False, kde=True, label='Lannister')
sns.distplot(House_Mormont[['Lannister_Word_Score']], hist=False, kde=True, label='Mormont')
sns.distplot(House_Targaryen[['Lannister_Word_Score']], hist=False,kde=True,label='Targaryen')
sns.distplot(House_Stark[['Lannister_Word_Score']], hist=False, kde=True,label='Stark')

sns.set({'figure.figsize':(11.7,8.27)})
sns.set_style({'figure.facecolor':'white'})
plt.figure(1)
plt.xlabel('Mormont_Word_Score')
plt.ylabel('Probability Density')
plt.title(' Mormont_Word_Score')
sns.distplot(House_Lannister[['Mormont_Word_Score']], hist=False, kde=True, label='Lannister')
sns.distplot(House_Mormont[['Mormont_Word_Score']], hist=False, kde=True, label='Mormont')
sns.distplot(House_Targaryen[['Mormont_Word_Score']], hist=False,kde=True,label='Targaryen')
sns.distplot(House_Stark[['Mormont_Word_Score']], hist=False, kde=True,label='Stark')

sns.set({'figure.figsize':(11.7,8.27)})
sns.set_style({'figure.facecolor':'white'})
plt.figure(2)
plt.xlabel('Targaryen_Word_Score')
plt.ylabel('Probability Density')
plt.title('Targaryen_Word_Score')
sns.distplot(House_Lannister[['Targaryen_Word_Score']], hist=False, kde=True, label='Lannister')
sns.distplot(House_Mormont[['Targaryen_Word_Score']], hist=False, kde=True, label='Mormont')
sns.distplot(House_Targaryen[['Targaryen_Word_Score']], hist=False,kde=True,label='Targaryen')
sns.distplot(House_Stark[['Targaryen_Word_Score']], hist=False, kde=True,label='Stark')

sns.set({'figure.figsize':(11.7,8.27)})
sns.set_style({'figure.facecolor':'white'})
plt.figure(3)
plt.xlabel('Stark_Word_Score')
plt.ylabel('Probability Density')
plt.title(' Stark_Word_Score')
sns.distplot(House_Lannister[['Stark_Word_Score']], hist=False, kde=True, label='Lannister')
sns.distplot(House_Mormont[['Stark_Word_Score']], hist=False, kde=True, label='Mormont')
sns.distplot(House_Targaryen[['Stark_Word_Score']], hist=False,kde=True,label='Targaryen')
sns.distplot(House_Stark[['Stark_Word_Score']], hist=False, kde=True,label='Stark')

# + [markdown] id="m-dV0uULFkub" colab_type="text"
# ####We can see that although the mean values of the word scores are not entirely clear at distinguishing between the houses, the KDE estimate of each house's own word score is skewed to the right for that house. For example, House Stark has the distribution skewed most to the right for Stark_Word_Score (higher values), and so on for each house and its own house_word_score. This means it's likely that we can still proceed with ML and achieve a decent model using this feature.

# + [markdown] id="b8oa6LMmxaNM" colab_type="text"
# ### We would also like to explore if there is any difference in the number of words spoken in a line for each house. The number of words spoken can indicate whether a character is more outspoken or more concise.
# ### We can't, however, just use our attribute Num_Words, as some rows contain multiple lines.
#
# ###We will identify the number of lines in each row based on common terminal punctuation such as periods, ellipses, question marks and exclamation marks.

# + id="6lFZ-6Rf4Bt7" colab_type="code" outputId="7d06a601-aa50-4be7-c277-b52ac3a0c594" colab={"base_uri": "https://localhost:8080/", "height": 0}
#Count number of lines per row of the data frame based on punctuation
def count_punc(sentence):
    counter=0
    for i, character in enumerate(sentence):
        # a period ends a line unless it is part of an ellipsis written as repeated periods
        if character=="." and (i==0 or sentence[i-1]!="."):
            counter=counter+1
        elif character=="…":
            counter=counter+1
        elif character=="?" or character=="!":
            counter=counter+1
        # if no terminal punctuation is found, count the row as a single line
        elif character==sentence[-1] and counter==0:
            counter=1
    return counter

#add another column of Num_Words_per_line
GOT1TopHouses_2['Num_of_lines']=GOT1TopHouses_2['Sentence'].apply(lambda x:count_punc(x))
GOT1TopHouses_2['Num_Words_per_line']=GOT1TopHouses_2['Num_Words']/GOT1TopHouses_2['Num_of_lines']
GOT1TopHouses_2.head(100)

# + id="PUoomWFox6Qk" colab_type="code" outputId="3d079c26-ecde-4377-bc03-5f619f2d7051" colab={"base_uri": "https://localhost:8080/", "height": 0}
#Evaluate the mean and count of the number of words spoken per line to see if there is a difference between houses
table=pd.pivot_table(GOT1TopHouses_2,values=['Num_Words_per_line'],index=['House'],aggfunc=['count','mean'])
table

# + id="4PkwoIUQyNOF" colab_type="code" outputId="aaba466e-ffd8-4d09-f906-78a211881b02" colab={"base_uri": "https://localhost:8080/", "height": 0}
#Plotting distribution of number of words per line.
House_Lannister=GOT1TopHouses_2[GOT1TopHouses_2.House=='House Lannister']
House_Targaryen=GOT1TopHouses_2[GOT1TopHouses_2.House=='House Targaryen']
House_Stark=GOT1TopHouses_2[GOT1TopHouses_2.House=='House Stark']
House_Mormont=GOT1TopHouses_2[GOT1TopHouses_2.House=='House Mormont']

sns.set({'figure.figsize':(11.7,8.27)})
sns.set_style({'figure.facecolor':'white'})
plt.figure(0)
plt.xlabel('Num_Words_Per_Line')
plt.ylabel('Probability Density')
plt.title(' Words Per Line')
sns.distplot(House_Lannister[['Num_Words_per_line']], hist=False, kde=True, label='Lannister')
sns.distplot(House_Mormont[['Num_Words_per_line']], hist=False, kde=True, label='Mormont')
sns.distplot(House_Targaryen[['Num_Words_per_line']], hist=False,kde=True,label='Targaryen')
sns.distplot(House_Stark[['Num_Words_per_line']], hist=False, kde=True,label='Stark')

# + [markdown] id="W2EbYW8YzOOF" colab_type="text"
# ### As can be seen, there is no distinct difference between the distributions of words per line based on the KDE estimation. This would pose a problem for our machine learning model, so we will not use this **attribute** for the machine learning model.
#

# + id="blF1akPP1yH1" colab_type="code" outputId="27c7854c-fe3b-4514-8c4f-65a5e7df66c2" colab={"base_uri": "https://localhost:8080/", "height": 0}
GOT1TopHouses_2.head(10)

# + [markdown] id="V_OC-3MZZt9W" colab_type="text"
# For the last part, we will randomly select entries from season 2 and process them so they have the same quantitative attributes: Num_Words, Num_Words_per_line, Sentiment_Compound_Score, Lannister_Word_Score, Mormont_Word_Score, Targaryen_Word_Score, and Stark_Word_Score.
#
# We would like to use them as our test data.
# + id="yip5ikx6ZtnQ" colab_type="code" colab={} #Extract only the top 4 houses of interest and only sample 400 as that is how many we need as test set '''GOT2TopHouses=GOT2[GOT2['House'].isin(['House Lannister', 'House Stark','House Targaryen', 'House Mormont'])] GOT2TopHouses=GOT2TopHouses.sample(n=400) #Create Sentiment Score GOT2TopHouses['Stemmed_Sentence_Non_Token']=GOT2TopHouses['Stemmed_Sentence'].apply(lambda x:convert_sentence(x)) GOT2TopHouses['Sentiment']=GOT2TopHouses['Stemmed_Sentence_Non_Token'].apply(lambda x:sid.polarity_scores(x)) GOT2TopHouses['Sentiment_Compound_Score']=GOT2TopHouses['Sentiment'].apply(lambda x:compound_score(x)) #Compose the Word Score for each House GOT2TopHouses['Lannister_Word_Score']=GOT2TopHouses.Stemmed_Sentence.apply(lambda x:house_word_score('House Lannister',x)) GOT2TopHouses['Stark_Word_Score']=GOT2TopHouses.Stemmed_Sentence.apply(lambda x:house_word_score('House Stark',x)) GOT2TopHouses['Targaryen_Word_Score']=GOT2TopHouses.Stemmed_Sentence.apply(lambda x:house_word_score('House Targaryen',x)) GOT2TopHouses['Mormont_Word_Score']=GOT2TopHouses.Stemmed_Sentence.apply(lambda x:house_word_score('House Mormont',x)) ''' #The porcessing time takes too long so we will jsut use a preprocessed GOT2TopHouses_2=pd.read_csv('https://raw.githubusercontent.com/trong-shen/Game-of-Throne-Project/master/GOT2TopHouses.csv') # + [markdown] id="sU4qnU14SqWo" colab_type="text" # #*Applying machine learning models on data set - KNN, Naive Bays, Decision Tree, and Random Forest* # # # # # + [markdown] id="P6e8FU2k15F3" colab_type="text" # Our overall research question is, can we predict a given line, which house the character who spoke the line is from? # # In this section, we look at applying different Machine Learning approaches to the sub dateset which includes the houses with the most sentence records. We explore four different approaches and apply them on both quantitative and qualitative variables. The goal here is to determine whether we can use certain variables to predict future sentences for a particular house. # + [markdown] id="tm5Y7fkJLUc7" colab_type="text" # First, we determine which houses should be included into our data set for predictive machine learning models. # + id="zvfbYOlhSpbq" colab_type="code" outputId="8aa77796-8632-4c6c-f966-bc96c8f28a14" colab={"base_uri": "https://localhost:8080/", "height": 183} #For Season 1 print(GOT1TopHouses_2['House'].value_counts()) #For Season 2 print(GOT2TopHouses_2['House'].value_counts()) # + id="EiKaw5oNXQxQ" colab_type="code" colab={} #Select a subset that only contains the top four house records in terms of records. GOT1_ML=GOT1TopHouses_2 GOT2_ML=GOT2TopHouses_2 # + [markdown] id="B27v0rAaLpWd" colab_type="text" # **[Approach 1]** In this section, we explore KNN classification with quantitative variables in our sub data set. 
X=**["Num_Words","Sentiment_Compound_Score","Lannister_Word_Score","Stark_Word_Score","Targaryen_Word_Score","Mormont_Word_Score"]**, and y=**["House"]** # + id="40f_rEFTL0Df" colab_type="code" outputId="cf84727d-3576-4981-ded3-ec1ba59e2048" colab={"base_uri": "https://localhost:8080/", "height": 500} GOT1_ML.info() # + id="F4Lvr5nGM0Xf" colab_type="code" colab={} #Create the X and Y Vector #X2 and Y2 are data from season 2 with 400 rows as test set for the model X=GOT1_ML[["Sentiment_Compound_Score","Lannister_Word_Score","Stark_Word_Score","Targaryen_Word_Score","Mormont_Word_Score"]] y=GOT1_ML["House"] X2=GOT2_ML[["Sentiment_Compound_Score","Lannister_Word_Score","Stark_Word_Score","Targaryen_Word_Score","Mormont_Word_Score"]] y2=GOT2_ML['House'] # + id="aAPCsdD_M0OM" colab_type="code" colab={} from sklearn.model_selection import train_test_split from sklearn.neighbors import KNeighborsClassifier # + id="pf0CpkhVRiV7" colab_type="code" colab={} X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.2,random_state=42) knn=KNeighborsClassifier(n_neighbors=9) knn.fit(X_train,y_train) y_hat=knn.predict(X_test) y_hat_2=knn.predict(X2) # We did some trial-and-error and found that using 9 closest neighbours, we have the best accuracy score. # + id="G0Cu1w2NR8mP" colab_type="code" outputId="84f2a487-448c-42fa-eda1-dc9d565f0030" colab={"base_uri": "https://localhost:8080/", "height": 50} #Calculate the accuracy score from sklearn.metrics import accuracy_score knn_s1=accuracy_score(y_test,y_hat) knn_s2=accuracy_score(y2,y_hat_2) print("From Knn, Accuracy of season 1 tested on season 1 model is ", knn_s1) print("From Knn, Accuracy of season 2 data tested on season 1 model is ", knn_s2) # + [markdown] id="NP-QikawSOmc" colab_type="text" # **[Approach 2]**In this section, we use the bag-of-words model to represent sentence data with machine learning algorithms. This means the only input is a matrix of all the words used in each line, and recorded of their frequency in each row of data. Particularly, we explore three machine learning models that work well with natural language processing: # **naive bayes, decisio trees, and random forest**. # + id="Rd7m59AbkY9a" colab_type="code" colab={} #Before conducting training and testing set, remove NA value. 
GOT1_ML.dropna(inplace=True)
GOT2_ML.dropna(inplace=True)

# + id="rWum89SfXpmc" colab_type="code" colab={}
#Prepare the training and testing datasets
NewSentence1=GOT1_ML["Stemmed_Sentence_Non_Token"].tolist()
NewSentence2=GOT2_ML["Stemmed_Sentence_Non_Token"].tolist()

#Create the bag-of-words model -> rows represent each line, each word is one column
from sklearn.feature_extraction.text import CountVectorizer
#Keep only the top 1,000 most frequent words for the analysis
cv=CountVectorizer(max_features=1000)

#Create X and y -> X is a large array of shape (# of records, # of words); y is the label column of GOT1_ML
X1=cv.fit_transform(NewSentence1).toarray()
y1=GOT1_ML.iloc[:,7].values
#Reuse the vocabulary fitted on season 1 (transform, not fit_transform) so the season 2 features line up with the season 1 columns
X2=cv.transform(NewSentence2).toarray()
y2=GOT2_ML.iloc[:,7].values

#Make training and test sets for Naive Bayes
from sklearn.model_selection import train_test_split
X1_train, X1_test, y1_train, y1_test = train_test_split(X1,y1,test_size=0.25,random_state=0)

# + [markdown] id="K6zkdNx1yQag" colab_type="text"
# Model One - Naive Bayes

# + id="Y5T14Ub3X5ZP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 70}
#Fit Naive Bayes to the training set
from sklearn.naive_bayes import GaussianNB
classifier1=GaussianNB()
classifier1.fit(X1_train,y1_train)

#Predict on the test sets
y1_pred_nb = classifier1.predict(X1_test)
y2_pred_nb = classifier1.predict(X2)

from sklearn.metrics import accuracy_score
nb_s1=accuracy_score(y1_test,y1_pred_nb)
nb_s2=accuracy_score(y2,y2_pred_nb)
print("From Naive Bayes, accuracy of season 1 data tested on the season 1 model is ", nb_s1)
print("From Naive Bayes, accuracy of season 2 data tested on the season 1 model is ", nb_s2)

# + [markdown] id="MmaC6depydD8" colab_type="text"
# Model Two - Decision Tree

# + id="GJISVqoOyaQR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 70}
from sklearn.tree import DecisionTreeClassifier
classifier2=DecisionTreeClassifier(criterion='entropy',random_state=0)
classifier2.fit(X1_train,y1_train)

y1_pred_tree = classifier2.predict(X1_test)
y2_pred_tree = classifier2.predict(X2)

from sklearn.metrics import accuracy_score
tree_s1=accuracy_score(y1_test,y1_pred_tree)
tree_s2=accuracy_score(y2,y2_pred_tree)
print("From the Decision Tree model, accuracy of season 1 data tested on the season 1 model is ", tree_s1)
print("From the Decision Tree model, accuracy of season 2 data tested on the season 1 model is ", tree_s2)

# + [markdown] id="5obIq_7cyszZ" colab_type="text"
# Model Three - Random Forest

# + id="-xdSH3jtyrtE" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 70}
from sklearn.ensemble import RandomForestClassifier
classifier3=RandomForestClassifier(n_estimators=10,criterion='entropy',random_state=0)
classifier3.fit(X1_train,y1_train)

y1_pred_random = classifier3.predict(X1_test)
y2_pred_random = classifier3.predict(X2)

from sklearn.metrics import accuracy_score
random_s1=accuracy_score(y1_test,y1_pred_random)
random_s2=accuracy_score(y2,y2_pred_random)
print("From Random Forest, accuracy of season 1 data tested on the season 1 model is ", random_s1)
print("From Random Forest, accuracy of season 2 data tested on the season 1 model is ", random_s2)

# + id="gDpJDDTuy3Ay" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 167}
# Create an accuracy table that combines all the different methods
s1_ml_result=[knn_s1,nb_s1,tree_s1,random_s1]
s2_ml_result=[knn_s2,nb_s2,tree_s2,random_s2]

result={'Season 1':s1_ml_result,'Season 2':s2_ml_result}
result_ml=pd.DataFrame(result,columns=['Season 1','Season 2'],index=['KNN','Naive Bayes','Decision Tree','Random Forest'])
result_ml

# + [markdown] id="CPMCgjj48HYM" colab_type="text"
# From the models above, we can conclude that predicting a house from word usage alone does not perform well. The best model is KNN, at roughly 53% accuracy on the Season 1 test data. Word choice by itself is not enough to predict a character's house; to make more accurate predictions, additional data sets and engineered variables should be included in any future analysis.
#
# Areas for improvement for KNN:
# - Sentiment score: accuracy may increase if we keep only non-neutral words, which would give a more distinct sentiment score per line.
# - The per-house word scores may improve if we remove words that are common to every house (a rough sketch of this idea follows below).
# - There may be other attributes we could engineer to help differentiate between the houses.
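# + [markdown]
# As a rough sketch of the second idea above (this is not part of the original analysis, and the `house_counters` input is an assumed, illustrative data structure built from per-house word counts): words that occur in every house's vocabulary carry little signal, so they could be dropped before the word scores are computed.

# +
from collections import Counter

def drop_shared_words(house_counters):
    """Remove words that occur in every house's counter.

    house_counters: dict mapping house name -> Counter of word frequencies
    (build it from the stemmed sentences of each house).
    """
    shared = set.intersection(*(set(c) for c in house_counters.values()))
    return {house: Counter({w: n for w, n in c.items() if w not in shared})
            for house, c in house_counters.items()}

# Tiny illustrative example
example = {
    'House Stark': Counter({'winter': 5, 'king': 3, 'wolf': 2}),
    'House Lannister': Counter({'debt': 4, 'king': 6, 'lion': 2}),
}
print(drop_shared_words(example))   # 'king' is dropped from both houses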
36,392
/creating_designing_sqlite_database_python.ipynb
c30b30473a0c8c8002d15229d2ca19840ae217f9
[]
no_license
guoqi228/creating_designing_database_sqlite_python
https://github.com/guoqi228/creating_designing_database_sqlite_python
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
236,446
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Before you turn this problem in, make sure everything runs as expected. First, **restart the kernel** (in the menubar, select Kernel$\rightarrow$Restart) and then **run all cells** (in the menubar, select Cell$\rightarrow$Run All). # # Make sure you fill in any place that says `YOUR CODE HERE` or "YOUR ANSWER HERE", as well as your name and collaborators below: NAME = "" COLLABORATORS = "" # --- # <!--NOTEBOOK_HEADER--> # *This notebook contains material from [PyRosetta](https://RosettaCommons.github.io/PyRosetta.notebooks); # content is available [on Github](https://github.com/RosettaCommons/PyRosetta.notebooks.git).* # <!--NAVIGATION--> # < [Predicting the ∆∆G of single point mutations](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/15.02-Membrane-Protein-ddG-of-mutation.ipynb) | [Contents](toc.ipynb) | [Index](index.ipynb) | [Distributed analysis example: exhaustive ddG PSSM](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/16.01-PyData-ddG-pssm.ipynb) ><p><a href="https://colab.research.google.com/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/16.00-Running-PyRosetta-in-Parellel.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open in Google Colaboratory"></a> # # Running Rosetta in Parellel # # ## Notes # # The following notebooks contain examples of how to run Rosetta in parellel either locally on a compute cluster through the `pyrosetta.distributed` package that is now included with PyRosetta. # # The first three notebooks contain two high-level examples of parellel PyRosett through a Rosetta-PyData integration using the pyrosetta.distributed namespace. # # ## Setup # # **For Chapter 16, Running PyRosetta in Parallel, you will need to use a specific version of PyRosetta that is built for parallelization.** This is the serialization build. Besides manually building it from the Rosetta C++ source code, the general way to obtain this is through the use of a `conda` environment. # # A `conda` environment is a way to run code that has specific versions of required packages, that instead of being installed globally, will be installed as a local virtual environment that you may run whenever you wish. This is extremely useful when some packages require specific versions of other packages, as is the case for some rosetta distributed code. # # You will need to pass the username and password of PyRosetta to conda. # In order to do this, we will create a file in your home directory called `~/.condarc`. The file should look like: # ``` # channels: # - https://USERNAME:[email protected] # - defaults # ``` # **Here, instead of USERNAME and PASSWORD, enter the USERNAME and PASSWORD you were given while gaining access to PyRosetta.** # # If you already have this file, please edit it instead of overriding it (below). 
# Using python: import os condarc = os.path.join(os.environ["HOME"], ".condarc") if not os.path.exists(condarc): with open(condarc, "w") as f: f.write("channels:\n") f.write(" - https://{USERNAME}:{PASSWORD}@conda.graylab.jhu.edu\n".format( USERNAME="USERNAME", PASSWORD="PASSWORD") ) f.write(" - defaults\n") # Alternatively, using bash: # ``` # # echo "channels:" >> $HOME/.condarc # # echo " - https://USERNAME:[email protected]" >> $HOME/.condarc # # echo " - defaults" >> $HOME/.condarc # ``` # Create the conda environment with the provided `environment.yml` file: # # >conda env create -f environment.yml # # then activate your environment: # # >conda activate PyRosetta.notebooks # # # Each time you wish to run this environment, use `conda activate PyRosetta.notebooks` to create the local virtual environment. You may wish to put this in your system configuration on startup. # # For your new conda environment to show up as a kernel option in Jupyter, you may have to register your custom kernel with Jupyter: # # > python -m ipykernel install --user --name PyRosetta.notebooks # # Installed kernels are listed with: # # > jupyter kernelspec list # # # # **NOTE:** # When using a notebook with this environment - the python **Kernel** must be set to this env. *In the following notebooks, this is done for you*, but if you wish to use this environment in other notebooks, make sure to manually change this! You can do this by looking at the jupyter menu - `Kernel` is after `Cell` and before `Widgets`. The option is 'Change Kernel`. This is how you would run python2 vs python3 or run a kernal with other conda environments you have installed on your computer. # # ## Citation # Citation PyData integration notebooks: # # [Integration of the Rosetta Suite with the Python Software Stack via reproducible packaging and core programming interfaces for distributed simulation](https://doi.org/10.1002/pro.3721) # # Alexander S. Ford, Brian D. Weitzner, Christopher D. Bahl # # ## Manual # Documentation for the `pyrosetta.distributed` namespace can be found here: https://nbviewer.jupyter.org/github/proteininnovation/Rosetta-PyData_Integration/blob/master/distributed_overview.ipynb # **Chapter contributors:** # # - Jason C. Klima (University of Washington; Lyell Immunopharma) # - Brian Weitzner (University of Washington; Lyell Immunopharma) # - Jared Adolf-Bryfogle (Scripps; Institute for Protein Innovation) # <!--NAVIGATION--> # < [Predicting the ∆∆G of single point mutations](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/15.02-Membrane-Protein-ddG-of-mutation.ipynb) | [Contents](toc.ipynb) | [Index](index.ipynb) | [Distributed analysis example: exhaustive ddG PSSM](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/16.01-PyData-ddG-pssm.ipynb) ><p><a href="https://colab.research.google.com/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/16.00-Running-PyRosetta-in-Parellel.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open in Google Colaboratory"></a> where the first row is 1, the second row is 2, etc. # - Insert a new column using a custom format. # # #### Because we have not yet normalized our data, it's better not to start with a compound primary key - if we do this, we might end up needing to create a compound key in another table that includes this compound key, which would quickly become cumbersome to work with. 
An integer primary key is a good choice, but we should first explore whether there is a system for uniquely identifying each game. If they do, this is a better option. It means that if at some later stage we choose to incorporate more detailed game data into our database, the keys we use will be compatible with other sources. # # #### 1. Using the Python SQLite library # The Python SQLite library gives us ultimate control when importing data. We will first need to get the data into Python - we might choose to use the csv module for this. Next, we would use the Cursor.execute() method to create a table for our data. # # Lastly, we can use the Cursor.executemany() method to insert multiple rows of data in a single command. If we create our connection object with a filename that doesn't exist, the sqlite module will create the database file for us. # # We should take advantage of the ? placeholder value syntax instead of using python string formatting to prevent SQL injection attacks and maintain the correct data types. Even though in this project we won't be running any external user code, this is an extremely good habit to get into. # # The advantage of this method is that we have the highest level of control over what we're doing. Additionally, if we have larger data, we can write a loop that iterates over our source line by line so that we don't have to read all of it into memory at once. # # The disadvantage is that there is a lot of manual data handling required. # # #### 2. Using Pandas # # The pandas library includes a handy DataFrame.to_sql() method that we can use to send the contents of a dataframe to a SQLite connection object. We can either create the table first using the method above, or if the table does not exist, pandas will create it for us. # # The advantage of this method is that it can often be done with a line or two of code. The disadvantage is that pandas may alter the data as it reads it in and converts the columns to types automatically. Additionally, this requires the data to be small enough to be able to be stored in-memory using pandas. # # #### 3. From the SQLite shell # # The last method is to use the SQLite shell to import the data. Like the pandas method, we can either create the table manually ourselves, or rely on SQLite to do it for us. # # This is one of the quickest methods to use and works well with large data sources. There are several minor inconveniences to this method. SQLite detects the column types using the first row of data, which can lead to incorrect types. You'll need SQLite shell access, which you won't always have. Lastly, if you want to create the table yourself, you will need to remove the header from the first line of your CSV, otherwise SQLite will make that the first row of your table. 
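# #### To make methods 1 and 3 concrete, here is a rough sketch (this is not part of the project code that follows; the `game_log_raw` table, its two columns, and the exact file name are purely illustrative):

# +
import csv
import sqlite3

# Method 1: the sqlite3 module with "?" placeholders and executemany().
with open("game_log.csv", "r") as f:
    reader = csv.reader(f)
    header = next(reader)                # skip the header row
    rows = [row[:2] for row in reader]   # illustrative: keep only the first two fields

with sqlite3.connect("mlb.db") as conn:
    # Creating the table ourselves lets us control the column types.
    conn.execute("""
        CREATE TABLE IF NOT EXISTS game_log_raw (
            date TEXT,
            number_of_game INTEGER
        );
    """)
    # "?" placeholders keep the correct types and protect against SQL injection.
    conn.executemany("INSERT INTO game_log_raw VALUES (?, ?);", rows)
# -

# #### Method 3 would be run from the SQLite shell rather than from Python, roughly like this:
#
# ```
# sqlite3 mlb.db
# .mode csv
# .import game_log.csv game_log
# ```
#
# In this project we will use the pandas approach (method 2) below.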
# + # create the run_command() and run_query() functions DB = "mlb.db" def run_query(q): with sqlite3.connect(DB) as conn: return pd.read_sql(q,conn) def run_command(c): with sqlite3.connect(DB) as conn: conn.execute('PRAGMA foreign_keys = ON;') conn.isolation_level = None conn.execute(c) def show_tables(): q = ''' SELECT name, type FROM sqlite_master WHERE type IN ("table","view"); ''' return run_query(q) # + # Use DataFrame.to_sql() to create tables for each of our # dataframes in a new SQLite database, mlb.db tables = { "game_log": log, "person_codes": person, "team_codes": team, "park_codes": park } with sqlite3.connect(DB) as conn: for name, data in tables.items(): conn.execute("DROP TABLE IF EXISTS {};".format(name)) data.to_sql(name,conn,index=False) # - show_tables() # + # Using run_command(), create a new column in the # game_log table called game_id: # Use SQL string concatenation to update the new columns # with a unique ID c1 = """ ALTER TABLE game_log ADD COLUMN game_id TEXT; """ # try/except loop since ALTER TABLE # doesn't support IF NOT EXISTS try: run_command(c1) except: pass c2 = """ UPDATE game_log SET game_id = date || h_name || number_of_game /* WHERE prevents this if it has already been done */ WHERE game_id IS NULL; """ run_command(c2) q = """ SELECT game_id, date, h_name, number_of_game FROM game_log LIMIT 5; """ run_query(q) # - # ### Database Normalization # # #### Repetition in columns # We have three columns that relate to one player, followed by three columns that relate to another player. We could restructure our data to remove this repetition - we would need to add an extra column to include the data that was previously only contained in the name of the column # # #### Non-primary key columns should be attributes of the primary key # The primary key of our game log is our game_id, but the players name are not attributes of a game, but of the player id. If the only data we had was the game log, we would remove this column and create a new table that had the names of each player. As it happens, our person_codes table already has a list of our player IDs and names, so we can remove these without the need for creating a new table first. # # #### Redundant Data # We want to eliminate any redundant data - that is, columns where the data is available elsewhere. # # ### Normalization Opportunities # # #### The following are opportunities for normalization of our data: # # - In person_codes, all the debut dates will be able to be reproduced using game log data. # - In team_codes, the start, end and sequence columns will be able to be reproduced using game log data. # - In park_codes, the start and end years will be able to be reproduced using game log data. While technically the state is an attribute of the city, we might not want to have a an incomplete city/state table so we will leave this in. # - There are lots of places in game log where we have a player ID followed by the players name. We will be able to remove this and use the name data in person_codes # - In game_log, all offensive and defensive stats are repeated for the home team and the visiting team. We could break these out and have a table that lists each game twice, one for each team, and cut out this column repetition. # - Similarly, in game_log, we have a listing for 9 players on each team with their positions - we can remove these and have one table that tracks player appearances and their positions. 
# - We can do a similar thing with the umpires from game_log, instead of listing all four positions as columns, we can put the umpires either in their own table or make one table for players, umpires and managers. # - We have several awards in game_log like winning pitcher and losing pitcher. We can either break these out into their own table, have a table for awards, or combine the awards in with general appearances like the players and umpires. # # ### Schema Diagram # #### Now that we've started to think about normalization ideas, it's time to start planning our schema. The best way to work is visually with a schema diagram, just like the ones we've used so far in this course. Start by creating a diagram of the four existing tables and their columns, and then gradually create new tables that move the data into a more normalized state. # # #### Some people like to do this on paper, others use diagramming tools like Sketch or Figma, others like using Photoshop or similar. Our recommendation is that the best way to do this is using a schema designing tool like DbDesigner.net. This free tool allows you to create a schema and will create lines to show foreign key relations clearly. # # ![schema](https://raw.githubusercontent.com/dataquestio/solutions/84a5fe20efcb4f14a8d508ff4b4c98d1ab80923a/images/schema-screenshot.png) # #### The tables we will create are below, with some notes on the normalization choices made: # # - person # - Each of the 'debut' columns have been omitted, as the data will be able to be found from other tables. # - Since the game log file has no data on coaches, we made the decision to not include this data. # - park # - The start, end, and league columns contain data that is found in the main game log and can be removed. # - league # - Because some of the older leagues are not well known, we will create a table to store league names. # - appearance_type # - Our appearance table will include data on players with positions, umpires, managers, and awards (like winning pitcher). This table will store information on what different types of appearances are available. # # ### Creating Tables Without Foreign Keys # # #### We'll first create each table, and then we'll insert the data. # + # Create the person table with columns # and primary key as shown in the schema diagram # Select the appropriate type based on the data. # Insert the data from the person_codes table. # Write a query to display the first few rows of the table. c1 = """ CREATE TABLE IF NOT EXISTS person ( person_id TEXT PRIMARY KEY, first_name TEXT, last_name TEXT ); """ c2 = """ INSERT OR IGNORE INTO person SELECT id, first, last FROM person_codes; """ q = """ SELECT * FROM person LIMIT 5; """ run_command(c1) run_command(c2) run_query(q) # + # Create the park table with columns and primary key as shown in the schema diagram. # Select the appropriate type based on the data # Insert the data from the park_codes table. # Write a query to display the first few rows of the table. c1 = """ CREATE TABLE IF NOT EXISTS park ( park_id TEXT PRIMARY KEY, name TEXT, nickname TEXT, city TEXT, state TEXT, notes TEXT ); """ c2 = """ INSERT OR IGNORE INTO park SELECT park_id, name, aka, city, state, notes FROM park_codes; """ q = """ SELECT * FROM park LIMIT 5; """ run_command(c1) run_command(c2) run_query(q) # + # Create the league table with columns and primary key as shown in the schema diagram. # Select the appropriate type based on the data. 
# Insert the data manually based on your research on the names of the six league IDs. # Write a query to display the table. c1 = """ CREATE TABLE IF NOT EXISTS league ( league_id TEXT PRIMARY KEY, name TEXT ); """ c2 = """ INSERT OR IGNORE INTO league VALUES ("NL", "National League"), ("AL", "American League"), ("AA", "American Association"), ("FL", "Federal League"), ("PL", "Players League"), ("UA", "Union Association") ; """ q = """ SELECT * FROM league """ run_command(c1) run_command(c2) run_query(q) # + # Create the appearance_type table with columns and primary key as shown in the schema diagram. # Select the appropriate type based on the data. # Import and insert the data from appearance_type.csv. # Write a query to display the table. c1 = "DROP TABLE IF EXISTS appearance_type;" run_command(c1) c2 = """ CREATE TABLE appearance_type ( appearance_type_id TEXT PRIMARY KEY, name TEXT, category TEXT ); """ run_command(c2) appearance_type = pd.read_csv('appearance_type.csv') with sqlite3.connect('mlb.db') as conn: appearance_type.to_sql('appearance_type', conn, index=False, if_exists='append') q = """ SELECT * FROM appearance_type; """ run_query(q) # - # #### Now that we have added all of the tables that don't have foreign key relationships, lets add the next two tables. The game and team tables need to exist before our two appearance tables are created. # # ### Adding The Team and Game Tables # # #### Here are some notes on the normalization choices made with each of these tables: # # - team # - The start, end, and sequence columns can be derived from the game level data. # - game # - We have chosen to include all columns for the game log that don't refer to one specific team or player, instead putting those in two appearance tables. # - We have removed the column with the day of the week, as this can be derived from the date. # - We have changed the day_night column to day, with the intention of making this a boolean column. Even though SQLite doesn't support the BOOLEAN type, we can use this when creating our table and SQLite will manage the underlying types behind the scenes (for more on how this works refer to the SQLite documentation. This means that anyone quering the schema of our database in the future understands how that column is intended to be used. # + # Create the team table with columns, primary key, and foreign key as shown in the schema diagram. # Select the appropriate type based on the data. # Insert the data from the team_codes table. # Write a query to display the first few rows of the table. c1 = """ CREATE TABLE IF NOT EXISTS team ( team_id TEXT PRIMARY KEY, league_id TEXT, city TEXT, nickname TEXT, franch_id TEXT, FOREIGN KEY (league_id) REFERENCES league(league_id) ); """ c2 = """ INSERT OR IGNORE INTO team SELECT team_id, league, city, nickname, franch_id FROM team_codes; """ q = """ SELECT * FROM team LIMIT 5; """ run_command(c1) run_command(c2) run_query(q) # + # Create the game table with columns, primary key, and foreign key as shown in the schema diagram. # Select the appropriate type based on the data. # Insert the data from the game_log table. # Write a query to display the first few rows of the table. 
c1 = """ CREATE TABLE IF NOT EXISTS game ( game_id TEXT PRIMARY KEY, date TEXT, number_of_game INTEGER, park_id TEXT, length_outs INTEGER, day BOOLEAN, completion TEXT, forefeit TEXT, protest TEXT, attendance INTEGER, legnth_minutes INTEGER, additional_info TEXT, acquisition_info TEXT, FOREIGN KEY (park_id) REFERENCES park(park_id) ); """ c2 = """ INSERT OR IGNORE INTO game SELECT game_id, date, number_of_game, park_id, length_outs, CASE WHEN day_night = "D" THEN 1 WHEN day_night = "N" THEN 0 ELSE NULL END AS day, completion, forefeit, protest, attendance, length_minutes, additional_info, acquisition_info FROM game_log; """ q = """ SELECT * FROM game LIMIT 5; """ run_command(c1) run_command(c2) run_query(q) # - # #### At this point, because we have told SQLite to enforce foreign key constraints and have inserted data that obeys these contraints, we'll get an error if we try to drop a table or delete rows within a table. # # ### Adding the Team Appearance Table # # #### Our next task is to add the team_appearance table. The team_appearance table has a compound primary key composed of the team name and the game ID. In addition, a boolean column home is used to differentiate between the home and the away team. The rest of the columns are scores or statistics that in our original game log are repeated for each of the home and away teams. # + # Create the team_appearance table with columns, primary key, and foreign keys as shown in the schema diagram. # Select the appropriate type based on the data. # Insert the data from the game_log table, using a UNION clause to combine the data from the column sets for the home and away teams. # Write a query to verify that your data was inserted correctly. c1 = """ CREATE TABLE IF NOT EXISTS team_appearance ( team_id TEXT, game_id TEXT, home BOOLEAN, league_id TEXT, score INTEGER, line_score TEXT, at_bats INTEGER, hits INTEGER, doubles INTEGER, triples INTEGER, homeruns INTEGER, rbi INTEGER, sacrifice_hits INTEGER, sacrifice_flies INTEGER, hit_by_pitch INTEGER, walks INTEGER, intentional_walks INTEGER, strikeouts INTEGER, stolen_bases INTEGER, caught_stealing INTEGER, grounded_into_double INTEGER, first_catcher_interference INTEGER, left_on_base INTEGER, pitchers_used INTEGER, individual_earned_runs INTEGER, team_earned_runs INTEGER, wild_pitches INTEGER, balks INTEGER, putouts INTEGER, assists INTEGER, errors INTEGER, passed_balls INTEGER, double_plays INTEGER, triple_plays INTEGER, PRIMARY KEY (team_id, game_id), FOREIGN KEY (team_id) REFERENCES team(team_id), FOREIGN KEY (game_id) REFERENCES game(game_id), FOREIGN KEY (team_id) REFERENCES team(team_id) ); """ run_command(c1) c2 = """ INSERT OR IGNORE INTO team_appearance SELECT h_name, game_id, 1 AS home, h_league, h_score, h_line_score, h_at_bats, h_hits, h_doubles, h_triples, h_homeruns, h_rbi, h_sacrifice_hits, h_sacrifice_flies, h_hit_by_pitch, h_walks, h_intentional_walks, h_strikeouts, h_stolen_bases, h_caught_stealing, h_grounded_into_double, h_first_catcher_interference, h_left_on_base, h_pitchers_used, h_individual_earned_runs, h_team_earned_runs, h_wild_pitches, h_balks, h_putouts, h_assists, h_errors, h_passed_balls, h_double_plays, h_triple_plays FROM game_log UNION SELECT v_name, game_id, 0 AS home, v_league, v_score, v_line_score, v_at_bats, v_hits, v_doubles, v_triples, v_homeruns, v_rbi, v_sacrifice_hits, v_sacrifice_flies, v_hit_by_pitch, v_walks, v_intentional_walks, v_strikeouts, v_stolen_bases, v_caught_stealing, v_grounded_into_double, v_first_catcher_interference, v_left_on_base, 
v_pitchers_used, v_individual_earned_runs, v_team_earned_runs, v_wild_pitches, v_balks, v_putouts, v_assists, v_errors, v_passed_balls, v_double_plays, v_triple_plays from game_log; """ run_command(c2) q = """ SELECT * FROM team_appearance WHERE game_id = ( SELECT MIN(game_id) from game ) OR game_id = ( SELECT MAX(game_id) from game ) ORDER By game_id, home; """ run_query(q) # - # #### The final table we need to create is person_appearance # # ### Adding the Person Appearance Table # # #### The person_appearance table will be used to store information on appearances in games by managers, players, and umpires as detailed in the appearance_type table. # # #### We'll need to use a similar technique to insert data as we used with the team_appearance table, however we will have to write much larger queries - one for each column instead of one for each team as before. We will need to work out for each column what the appearance_type_id will be by cross-referencing the columns with the appearance_type table. # # #### We have decided to create an integer primary key for this table, because having every column be a compound primary quickly becomes cumbersome when writing queries. # + # Create the person_appearance table with columns, primary key, and foreign keys as shown in the schema diagram. # Select the appropriate type based on the data. # Insert the data from the game_log table, using UNION clauses to combine the data from the columns for managers, umpires, pitchers, and awards. # Use a loop with string formatting to insert the data for offensive and defensive positions from the game_log table. # Write a query to verify that your data was inserted correctly. c0 = "DROP TABLE IF EXISTS person_appearance" run_command(c0) c1 = """ CREATE TABLE person_appearance ( appearance_id INTEGER PRIMARY KEY, person_id TEXT, team_id TEXT, game_id TEXT, appearance_type_id, FOREIGN KEY (person_id) REFERENCES person(person_id), FOREIGN KEY (team_id) REFERENCES team(team_id), FOREIGN KEY (game_id) REFERENCES game(game_id), FOREIGN KEY (appearance_type_id) REFERENCES appearance_type(appearance_type_id) ); """ c2 = """ INSERT OR IGNORE INTO person_appearance ( game_id, team_id, person_id, appearance_type_id ) SELECT game_id, NULL, hp_umpire_id, "UHP" FROM game_log WHERE hp_umpire_id IS NOT NULL UNION SELECT game_id, NULL, [1b_umpire_id], "U1B" FROM game_log WHERE "1b_umpire_id" IS NOT NULL UNION SELECT game_id, NULL, [2b_umpire_id], "U2B" FROM game_log WHERE [2b_umpire_id] IS NOT NULL UNION SELECT game_id, NULL, [3b_umpire_id], "U3B" FROM game_log WHERE [3b_umpire_id] IS NOT NULL UNION SELECT game_id, NULL, lf_umpire_id, "ULF" FROM game_log WHERE lf_umpire_id IS NOT NULL UNION SELECT game_id, NULL, rf_umpire_id, "URF" FROM game_log WHERE rf_umpire_id IS NOT NULL UNION SELECT game_id, v_name, v_manager_id, "MM" FROM game_log WHERE v_manager_id IS NOT NULL UNION SELECT game_id, h_name, h_manager_id, "MM" FROM game_log WHERE h_manager_id IS NOT NULL UNION SELECT game_id, CASE WHEN h_score > v_score THEN h_name ELSE v_name END, winning_pitcher_id, "AWP" FROM game_log WHERE winning_pitcher_id IS NOT NULL UNION SELECT game_id, CASE WHEN h_score < v_score THEN h_name ELSE v_name END, losing_pitcher_id, "ALP" FROM game_log WHERE losing_pitcher_id IS NOT NULL UNION SELECT game_id, CASE WHEN h_score > v_score THEN h_name ELSE v_name END, saving_pitcher_id, "ASP" FROM game_log WHERE saving_pitcher_id IS NOT NULL UNION SELECT game_id, CASE WHEN h_score > v_score THEN h_name ELSE v_name END, winning_rbi_batter_id, "AWB" FROM 
game_log WHERE winning_rbi_batter_id IS NOT NULL UNION SELECT game_id, v_name, v_starting_pitcher_id, "PSP" FROM game_log WHERE v_starting_pitcher_id IS NOT NULL UNION SELECT game_id, h_name, h_starting_pitcher_id, "PSP" FROM game_log WHERE h_starting_pitcher_id IS NOT NULL; """ template = """ INSERT INTO person_appearance ( game_id, team_id, person_id, appearance_type_id ) SELECT game_id, {hv}_name, {hv}_player_{num}_id, "O{num}" FROM game_log WHERE {hv}_player_{num}_id IS NOT NULL UNION SELECT game_id, {hv}_name, {hv}_player_{num}_id, "D" || CAST({hv}_player_{num}_def_pos AS INT) FROM game_log WHERE {hv}_player_{num}_id IS NOT NULL; """ run_command(c1) run_command(c2) for hv in ["h","v"]: for num in range(1,10): query_vars = { "hv": hv, "num": num } run_command(template.format(**query_vars)) print(run_query("SELECT COUNT(DISTINCT game_id) games_game FROM game")) print(run_query("SELECT COUNT(DISTINCT game_id) games_person_appearance FROM person_appearance")) q = """ SELECT pa.*, at.name, at.category FROM person_appearance pa INNER JOIN appearance_type at on at.appearance_type_id = pa.appearance_type_id WHERE PA.game_id = ( SELECT max(game_id) FROM person_appearance ) ORDER BY team_id, appearance_type_id """ run_query(q) # - # #### We've now created all normalized tables and inserted all of our data. Our last task is to remove the tables we created to import the original CSVs. # # ### Removing the Original Tables # Drop the tables we created to hold our unnormalized data: # game_log. # park_codes. # team_codes. # person_codes. show_tables() # + tables == [ "game_log", "park_codes", "team_codes", "person_codes" ] for t in tables: c = ''' DROP TABLE {} '''.format(t) run_command(c) show_tables() # - # #### In this project, we learned how to: # # - Import CSV data into a database. # - Design a normalized schema for a large, predominantly single table data set. # - Create tables that match the schema design. # - Migrate data from unnormalized tables into our normalized tables.
31,409
/dev_model.ipynb
d910b16cf80ccb81527d2e8ae738e071170d3f0a
[]
no_license
leviv/TwitterStyleTransfer
https://github.com/leviv/TwitterStyleTransfer
0
1
null
null
null
null
Jupyter Notebook
false
false
.py
4,650
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import os import nltk import string import numpy as np from torchtext import data from nltk.corpus import stopwords nltk.download('stopwords') # max chars in tweet TWEET_LEN = 280 trainPath = './data/tweets.train.txt' trainLabelsPath = './data/tweets.train.labels' # load vocab with open('./data/vocab.txt', 'r', encoding='UTF-8') as file: vocab = file.read().split() vocab = set(vocab) # stopwords set sw = set(stopwords.words('english')) def cleanTweet(tweet): # split into sentences tokens = tweet.lower().split() # filter out non-alphabetic, stopwords, short tokens, and tokens not in vocab tokens = list(filter(lambda x: x.isalpha() and x not in sw \ and len(x) > 1 and x in vocab, tokens)) # create new sentences tokens = ' '.join(tokens) return tokens def loadData(dataFile, labelsFile): """ Loads twitter data and it's related labels """ # first element is twitter data, second is labels data = [] for path in [dataFile, labelsFile]: with open(path, 'r', encoding='UTF-8') as file: text = file.read().split('\n') data.append(text) data = np.array(data) return data X_train, y_train = loadData(trainPath, trainLabelsPath) # clean training data X_train = np.array(list(map(cleanTweet, X_train))) # map words to integers # tokenizer = pre.text.Tokenizer() # tokenizer.fit_on_texts(X_train) # encode data # encodedTweets = tokenizer.texts_to_sequences(X_train) # X_train = pre.sequence.pad_sequences(encodedTweets, maxlen=TWEET_LEN, padding='post') # add dimension for sequence length, 1 # X_train = np.expand_dims(X_train, 1).astype(np.float32) # create embedding layer # embedding = Embedding(input_dim=len(vocab), output_dim=100, input_length=TWEET_LEN) # + TEXT = data.Field(init_token='<start>', eos_token='<eos>', tokenize='spacy', fix_length=280) LABEL = data.Field(sequential=False, unk_token=None) TEXT.build_vocab(list(vocab)) LABEL.build_vocab(y_train) n_vocab = len(TEXT.vocab.itos) # + pycharm={"name": "#%%\n"} # text = TEXT.process(["I love candy", "I'm the fucking greatest"]) TEXT.vocab.itos
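# +
# Quick sanity check of the cleaning step (illustrative only -- the exact tokens
# that survive depend on the words present in data/vocab.txt).
sample_tweet = "Just landed in Washington what an amazing crowd"
print(cleanTweet(sample_tweet))
# likely prints something like: "landed washington amazing crowd"
# i.e. lowercased, stopwords removed, and only words found in the vocab kept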
2,454
/01.Polytropes.ipynb
21d6d31dca726a947ee8d0babda36ac54f27eb22
[]
no_license
gfabj/stellar_coockbook
https://github.com/gfabj/stellar_coockbook
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
73,121
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import matplotlib.pylab as plt # %matplotlib inline from numpy import array, pi # # Polytropes # In the lecture, you considered polytropic stars where the pressure in the stellar interior is only a function of density, # \begin{align} # P = K \rho^\gamma = K \rho^{1+\frac{1}{n}}, # \label{eq:polytrope} # \end{align} # where $n$ is the polytropic index. Writing the density as $\rho(r) = \rho_\mathrm{c} \theta^n(r)$ such that $P(r) = K \rho_\mathrm{c}^{1+\frac{1}{n}} \theta^{1+n}(r) = P_\mathrm{c} \theta^{1+n}(r)$ for $0\leq\theta\leq1$, and the radius as $r=r_n\xi$ with $r_n^2 = \frac{(n+1)K}{4\pi G\rho_\mathrm{c}^{(n-1)/n}} = \frac{(n+1)P_\mathrm{c}}{4\pi G\rho_\mathrm{c}^2}$, led to the Lane-Emden equation, # \begin{align} # \frac{1}{\xi^2}\frac{\mathrm{d}}{\mathrm{d}\xi}\left(\xi^2\frac{\mathrm{d}\theta}{\mathrm{d}\xi}\right) = -\theta^n, # \label{eq:lane-emden} # \end{align} # of a polytropic star that is in hydrostatic equilibrium. # # In the core ($r=0$, i.e. $\xi=0$), we have $\theta=1$ such that $\rho_\mathrm{c}$ is indeed the central density and $\mathrm{d}\theta/\mathrm{d}\xi=0$ because $\mathrm{d}P/\mathrm{d}r=0$ for $r=0$ (cf. hydrostatic equilibrium). # # At the surface ($r=R$), we require $\rho=P=0$, i.e. $\theta=0$. In fact, the general solution of $\theta$ may cross zero several times (it may oscillate around zero) and we denote the surface as that point $\xi_1$ where $\theta$ has its first zero point to avoid zero densities inside the star. The radius of such a star is then $R=r_n \xi_1$. # #### Problem 1: # For an ideal gas, $P=\frac{k_\mathrm{B}}{\mu m_\mathrm{u}}\rho T$, what physical quantity does $\theta$ describe? *Hint:* From the above definitions, you can easily see that $\theta^n$ is directly proportional to the density $\rho$ and $\theta^{1+n}$ is proportional to pressure $P$... # # Your solutions here # $\theta$ describes the temperature of the star # We now solve the Lane-Emden equation to obtain, e.g., the interior structure of a simple (polytropic) star. In the lecture, we introduced $x=\xi$, $y=\theta$ and $z=\frac{\mathrm{d}\theta}{\mathrm{d}\xi}$ to rewrite the second-order Lane-Emden differential equation into a set of two first-order differential equations, # \begin{align} # \frac{\mathrm{d}y}{\mathrm{d}x} &= z, \\ # \frac{\mathrm{d}z}{\mathrm{d}x} &= -\frac{2}{x}z - y^n. # \end{align} # We will solve these equations via a simple shooting method that numerically integrates the equations from the centre to the surface. To this end, we discretise $x$, i.e. the radius, in small steps of length $h$. We start at some $x_0$ at which we also need $y_0$ and $z_0$ (see Problem 2). For a simple Euler step of stepsize $h$, simple integration of the two differential equations yields # \begin{align} # x_1 &\rightarrow x_0 + h, \\ # y_1 &\rightarrow y_0 + z_0 h, \\ # z_1 &\rightarrow z_0 + \left(-\frac{2}{x_0}z_0-y_0^n\right)h. \\ # \end{align} # We then continue to obtain $x_2$, $y_2$ and $z_2$ from $x_1$, $y_1$ and $z_1$, and so on until we reach the surface of the star. Such a Euler method is only accurate to first order, but we could simply use higher-order, i.e. more accurate, integrators such as the Runge-Kutta method. In our case, the simple Euler method will suffice for small enough stepsizes $h$. 
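#
# Note that the answer to Problem 1 above follows directly from the ideal-gas law: combining $P=\frac{k_\mathrm{B}}{\mu m_\mathrm{u}}\rho T$ with $P = P_\mathrm{c}\theta^{1+n}$ and $\rho = \rho_\mathrm{c}\theta^{n}$ gives
# \begin{align}
# T = \frac{\mu m_\mathrm{u}}{k_\mathrm{B}}\,\frac{P}{\rho} = \frac{\mu m_\mathrm{u}}{k_\mathrm{B}}\,\frac{P_\mathrm{c}}{\rho_\mathrm{c}}\,\theta = T_\mathrm{c}\,\theta ,
# \end{align}
# so for a constant mean molecular weight $\mu$, $\theta$ is the temperature measured in units of the central temperature $T_\mathrm{c}$.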
# #### Problem 2: # (a) What are the conditions $x_0$, $y_0$ and $z_0$ in the core of the star, and what is the stopping condition at the surface? # # (b) The core of the star is at $x=0$, but the second differential equation has a singularity at $x=0$. How can we avoid this singularity in the numerical integration? *Hint:* The proper solution would be to make a Taylor expansion of the equations about the origin but a much simpler albeit less accurate trick will do here... # # Your answer here # (a) $x_0$=0,$y_0$ =1, $z_0=0$ # # # (b) we want to stop at y=1 # # # (c) The stopping condition for x should be 1e-7 , while y>0 # # # #### Problem 3: # (a) Now, write a function that integrates the Lane-Emden equation for a polytrope of index $n$ out to a maximum $x$ of $x_\mathrm{max}=20$ with stepsize $h=10^{-3}$. The function should accept the polytropic index $n$, the stepsize $h$ and the maximum $x_\mathrm{max}$ as arguments and return lists of $x$, $y$ and $z$. # # (b) Extend your function by a linear interpolation to also return the surface values of $x$, $y$ and $z$. def polytrope(n, h=1e-3, x_max=20.0): xi = 1e-7 yi = 1 zi = 0 x = [] y = [] z = [] while yi>=0 and xi<=x_max: # compute Euler step for next point # # Your integration goes here! # xj = xi+h yj = yi+(zi*h) zj = zi+((-2*zi/xi)-yi**n)*h # add point to output array x.append(xj) y.append(yj) z.append(zj) # set new points to old points for next step xi = xj yi = yj zi = zj # get surface properties(linear interpolation) ysurf = 0.0 xsurf = x[-1] zsurf = z[-1] return (np.array(x), np.array(y), np.array(z),xsurf,ysurf,zsurf) # + #polytrope(3, h=1e-3, x_max=20.0) # - # ## Density profile of polytropes # #### Problem 4: # Solve the Lane-Emden equation for polytropic indices of $n=1.5$ and $n=3.0$ using your function from above and plot the relative density $\rho/\rho_\mathrm{c}=\theta^n$ against the relative radius $r/R=\xi/\xi_1$. Which polytrope is centrally more condensed? Do not forget to add labels to your axes. # + # Your solution here #theta^n on y and x on x x_15,y_15,z_15,xs_15,ys_15,zs_15 = polytrope(1.5) x_30,y_30,z_30,xs_30,ys_30,zs_30 = polytrope(3.0) plt.figure(figsize=(10,5)) plt.plot(x_15/xs_15, np.power(y_15,1.5),label=1.5,c='blue') plt.plot(x_30/xs_30, np.power(y_30,3.0),label=3.0,c='red') plt.title('Lane-Emden equation',fontsize=20) plt.ylabel(r'$\rho/\rho_c$',fontsize=18) plt.xlabel('$r/R$',fontsize=18) plt.legend() plt.savefig('plot1.pdf') plt.show() # - # ## Tabulated polytropic constants # In the lecture, we derived further useful quantities that we briefly summarise here for your convenience. For example, the total mass of a polytropic star is # \begin{align} # M = 4\pi r_n^3 \rho_\mathrm{c} M_n # \end{align} # with # \begin{align} # M_n = -\xi_1^2 \left. \frac{\mathrm{d}\theta}{\mathrm{d}\xi}\right\vert_{\xi=\xi_1}. # \end{align} # The central density is then related to the mean density, # \begin{align} # \rho_\mathrm{c} = D_n \frac{M}{4\pi/3R^3} = D_n \bar{\rho} # \end{align} # with # \begin{align} # D_n = -\left[\frac{3}{\xi_1} \left. \frac{\mathrm{d}\theta}{\mathrm{d}\xi}\right\vert_{\xi=\xi_1} \right]^{-1}. # \end{align} # The radius is $R=r_n \xi_1=r_n R_n$, i.e. # \begin{align} # R_n = \xi_1. 
# \end{align} # We also arrived at a mass-radius relation, # \begin{align} # \frac{R}{R_\mathrm{n}} = \left(\frac{1}{4\pi}\right)^{\frac{1}{3-n}} \left(\frac{(n+1)K}{G}\right)^{\frac{n}{3-n}} \left(\frac{M_\mathrm{n}}{M}\right)^{\frac{n-1}{3-n}}, # \end{align} # which we used to show that the central pressure of a star can be written as # \begin{align} # P_\mathrm{c} = B_n \left(\frac{4\pi}{3}\right)^{1/3} G M^{2/3} \rho_\mathrm{c}^{4/3}, # \end{align} # where # \begin{align} # B_n = \frac{3^{1/n}}{n+1} M_n^{\frac{1-n}{n}} R_n^{\frac{n-3}{n}} D_n^{\frac{3-n}{3n}}. # \end{align} # The equation that relates the central pressure and the density of stars of different mass is particularly remarkable, because one can understand almost all of stellar evolution from this equation alone. In fact, you can derive this equation more easily from homology or the equation of hydrostatic equilibrium. Later in the course, you will understand better why this equation is so important for our understanding of stellar evolution. # #### Student-report part 1: # Solve the Lane-Emden equation for $n=1.0$, $1.5$, $2.0$, $2.5$, $3.0$ and $3.5$, compute the dimensionless parameters $D_n$, $M_n$, $R_n$ and $B_n$ (see summary above) and put them in a table in your report. What is the value of $M_3$? # + # polytopic indices ns=np.arange(1.0, 4.0, 0.5) ns.any() nt = np.array([1.0,1.5,2.0,2.5,3.0,3.5]) # Your solution here def polx(n, h=1e-3, x_max=20.0): xi = 1e-7 yi = 1 zi = 0 x = [] y = [] z = [] while yi>=0 and xi<=x_max: # compute Euler step for next point #np.logical_and(x > -2, x < 2) # Your integration goes here! # xj = xi+h yj = yi+(zi*h) zj = zi+((-2*zi/xi)-yi**n)*h # add point to output array x.append(xj) y.append(yj) z.append(zj) # set new points to old points for next step xi = xj yi = yj zi = zj # get surface properties(linear interpolation) ysurf = 0.0 xsurf = x[-1] zsurf = z[-1] return (xsurf) def polz(n, h=1e-3, x_max=20.0): xi = 1e-7 yi = 1 zi = 0 x = [] y = [] z = [] while yi>=0 and xi<=x_max: # compute Euler step for next point # # Your integration goes here! # xj = xi+h yj = yi+(zi*h) zj = zi+((-2*zi/xi)-yi**n)*h # add point to output array x.append(xj) y.append(yj) z.append(zj) # set new points to old points for next step xi = xj yi = yj zi = zj # get surface properties(linear interpolation) ysurf = 0.0 xsurf = x[-1] zsurf = z[-1] return (zsurf) # + ### Define polytropic quantities Dn, Mn, def Dn(n,polx,polz): dn= -((3/polx(n))*polz(n))**(-1) return dn D1,D15,D2,D25,D3,D35= Dn(1.0,polx,polz),Dn(1.5,polx,polz),Dn(2,polx,polz),Dn(2.5,polx,polz),Dn(3,polx,polz),Dn(3.5,polx,polz) D=array([D1,D15,D2,D25,D3,D35]) ######### def Mn(n,polx,polz): mn= -(polx(n)**2)*polz(n) return mn M1,M15,M2,M25,M3,M35= Mn(1.0,polx,polz),Mn(1.5,polx,polz),Mn(2,polx,polz),Mn(2.5,polx,polz),Mn(3,polx,polz),Mn(3.5,polx,polz) M=array([M1,M15,M2,M25,M3,M35]) ########## R1,R15,R2,R25,R3,R35 = polx(1.0),polx(1.5),polx(2.0),polx(2.5),polx(3.0),polx(3.5) R = array([R1,R15,R2,R25,R3,R35]) ######### def Bn(n,mn,rn,dn): bn = 3**(1/n)/(n+1)*mn**(1-n/n)*rn**(n-3/n)*dn**((3-n)*3*n) return bn N = array([1.0,1.5,2.0,2.5,3.0,3.5]) B = Bn(N,M,R,D) print('n=', N) print('Dn=', D) print('Mn=', M) print('Rn=', R) print('Bn=', B) # - # White dwarfs are a special type of stars in which gravity is balanced by the quantum-mechanical degeneracy pressure of electrons (Pauli principle). Such objects are the end products of some stars including our Sun. 
# # Later in the lecture, we will show that a fully degenerate electron gas produces a pressure # \begin{align} # P_\mathrm{e} = \frac{h^2}{20 m_\mathrm{e}} \left(\frac{3}{\pi}\right)^{2/3} \frac{1}{m_\mathrm{H}^{5/3}} \left(\frac{\rho}{\mu_\mathrm{e}}\right)^{5/3}, # \end{align} # where $\mu_\mathrm{e}$ is the mean molecular weight per free electron, which is $\mu_\mathrm{e}=2.0$ for a star made of only helium, carbon, oxygen etc. and is $\mu_\mathrm{e}=2.15$ for a pure iron gas. In terms of the atomic mass unit $m_\mathrm{u}$, the hydrogen atom has a mass of $m_\mathrm{H}\approx 1.00784 m_\mathrm{u}$. # # If the electrons are fully relativistic, the degeneracy pressure takes the form # \begin{align} # P_\mathrm{e,r} = \frac{h c}{8} \left(\frac{3}{\pi}\right)^{1/3} \frac{1}{m_\mathrm{H}^{4/3}} \left(\frac{\rho}{\mu_\mathrm{e}}\right)^{4/3}. # \end{align} # This implies that white dwarfs are well described by a polytrope of index $n=1.5$ ($\gamma=1+\frac{1}{n}=5/3$) and, if the electrons are fully relativistic, by a polytrope of index $n=3.0$ ($\gamma=4/3$). # # Mass-radius relation of white dwarfs # #### Student-report part 2: # Use the tabulated values of $R_n$ and $M_n$ of a $n=1.5$ polytrope and the constant $K$ from electron degeneracy pressure to plot the mass-radius relation of white dwarfs for masses $0.01\text{--}2.0\,\mathrm{M}_\odot$. Has a more massive white dwarf a smaller or a larger radius than a less massive white dwarf? # # *Hint:* The Python package `scipy.constants` has many natural [constants](https://docs.scipy.org/doc/scipy/reference/constants.html) in SI units. However, it does not provide commonly-used astronomical constants. For example, loading the module via `import scipy.constants as const`, you can access Planck's constant via `const.h`. # + import scipy.constants as const M_sun = 1.98847542e30 # in kg R_sun = 695700000.0 # in m Mwd = np.linspace(0.01, 2.0, 1000) # a useful range of WD masses mu = 1.6605390666e-27 #in kg Rwd = R15*((1/(4*pi))**(1/(1.5)))*(2.5/const.G)*(M15/(Mwd*M_sun))**(0.5/1.5)/R_sun # Your solution here bsfont = {'fontname':'Cambria Math'} plt.figure(figsize=(10,7)) plt.plot(Mwd,Rwd,c='blue') plt.xlabel(r'$M/M_{\odot}$',fontsize=18,**bsfont) plt.ylabel(r'$R/R_{\odot}$',fontsize=18,**bsfont) plt.title('White Dwarf Mass-Radius relation',fontsize=20,**bsfont) plt.grid() # - # Question: More massive white dwarf has smaller radius # This mass-radius relation is not completely accurate because, for higher masses, there will be some contribution from relativistic degenerate electrons such that the white dwarfs cannot be solely described as a $n=1.5$ polytrope anymore. # # For example, on [Wikipedia](https://en.wikipedia.org/wiki/White_dwarf) there is a more complete graph: # <img src="ChandrasekharLimitGraph.svg" alt="WD mass-radius relation" width="500"/> # # Chandrasekhar mass # The mean density of white dwarfs described by a $n=1.5$ polytrope increases as $\bar{\rho}\propto M^2$ (why?). For increasing mass $M$, the mean density increases and electrons become relativistic (why?). # # A fully relativistic, degenerate electron gas is well described by a $n=3.0$ polytrope. As we have seen in the lecture, there is only one mass $M$ for which such a star can be in hydrostatic equilibrium. This mass is called the Chandrasekhar mass. If a white dwarf exceeds this limit, it will collapse and produce a Type Ia supernova. We will hear more about these stars later in the lecture. 
# #### Student-report part 3: # Use the tabulated value $M_3$ and the polytropic constant $K$ from above to show that the Chandrasekhar mass is $M_\mathrm{Ch}\approx5.83\,\mu_\mathrm{e}^{-2}\,\mathrm{M}_\odot$. Compute the Chandrasekhar mass for a white dwarf made of carbon and oxygen, and of a white dwarf made of iron. The latter mass limit is relevant for core-collapse supernovae as we will see later in this course. For which composition do we have $\mu_\mathrm{e}=1.0$ and hence a Chandrasekhar mass of $5.83\,\mathrm{M}_\odot$? # Solution: # # (a) for a degenerate non-relativistic electron gas, $R \propto M^{-1/3}$. Combining it with $\rho \propto MR^{-3}$ we obtain $\bar \rho \propto M^2$ # # (b) The mean density increases and electrons become relativistic due to Pauli exclusion principle (electrons are pushed to higher momenta) # # (c) For n=3, the mass-radius relation becomes: # # $M = 4\pi M_{3}\left(\frac{K}{\pi G}\right)^{3/2}$ # # plugging in $M_{3}=2.0170$ # # (d) the general equation for $\mu_e$ is: # # \begin{equation} # \mu_e = \left(\sum_{i}\frac{X_i Z_i}{A_i}\right)^{-1} # \end{equation} # # where A is the molecular weight and Z is the charge. For Hydrogen, $\mu_e = 1$, for Oxygen, Helium, and Carbon $\mu_e = 2$, and for iron $\mu_e = 56/26$ # # The resulting Chandrasekhar masses $M_\mathrm{Ch}$ are: # # (e) $\mu_e = 1$ is valid for Hydrogen composition since $A/Z=1$ # For this and other seminal contributions to the theory of stars, Subrahmanyan Chandrasekhar received a Nobel Prize in Physics in 1983. By the way: he did much of this work on a boat trip from India to the UK. def Chandra(me): m = 5.83*me**(-2) return m print ('chandra O', Mch(2)) print ('chandra F', Mch(56/26))
16,264
/ProjectCovid/Untitled.ipynb
8ac8464334c5ec965c9774ec5bc897b89e54e069
[]
no_license
Crazy-Cat10/Machine-Learning
https://github.com/Crazy-Cat10/Machine-Learning
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
101,549
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # Import all the tools we need # Regular EDA (exploratory data analysis) and plotting libraries import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns # we want our plots to appear inside the notebook # %matplotlib inline # Models from Scikit-Learn from sklearn.linear_model import LogisticRegression from sklearn.neighbors import KNeighborsClassifier from sklearn.ensemble import RandomForestClassifier from sklearn.naive_bayes import GaussianNB # Model Evaluations from sklearn.model_selection import train_test_split, cross_val_score from sklearn.model_selection import RandomizedSearchCV, GridSearchCV from sklearn.metrics import confusion_matrix, classification_report from sklearn.metrics import precision_score, recall_score, f1_score from sklearn.metrics import plot_roc_curve # - df = pd.read_csv("train.csv") df.head(7).T df.isna().sum() df_tmp=df.copy() # This will turn all of the string value into category values for label, content in df_tmp.items(): if pd.api.types.is_string_dtype(content): df_tmp[label] = content.astype("category").cat.as_ordered() df_tmp.info() # Turn categorical variables into numbers and fill missing for label, content in df_tmp.items(): if not pd.api.types.is_numeric_dtype(content): # Turn categories into numbers and add +1 df_tmp[label] = pd.Categorical(content).codes+1 df_tmp.info() X_train=df_tmp.drop("Attrition",axis=1) y_train=df_tmp["Attrition"] df_test = pd.read_csv("test.csv") # This will turn all of the string value into category values for label, content in df_test.items(): if pd.api.types.is_string_dtype(content): df_test[label] = content.astype("category").cat.as_ordered() # Turn categorical variables into numbers and fill missing for label, content in df_test.items(): if not pd.api.types.is_numeric_dtype(content): # Turn categories into numbers and add +1 df_test[label] = pd.Categorical(content).codes+1 X_test=df_test # + # Put models in a dictionary models = {"Logistic Regression": LogisticRegression(max_iter=400), "KNN": KNeighborsClassifier(), "Random Forest": RandomForestClassifier() ,"Naive-Bayes":GaussianNB()} # Create a function to fit and score models def fit_and_score(models, X_train, X_test, y_train,y_test): """ Fits and evaluates given machine learning models. 
models : a dict of differetn Scikit-Learn machine learning models X_train : training data (no labels) X_test : testing data (no labels) y_train : training labels """ # Set random seed np.random.seed(42) # Make a dictionary to keep model scores model_scores = {} # Loop through models for name, model in models.items(): # Fit the model to the data model.fit(X_train, y_train) # Evaluate the model and append its score to model_scores model_scores[name] = model.score(X_test, y_test) return model_scores # - # + X=df_tmp.drop("Attrition",axis=1) y=df_tmp["Attrition"] np.random.seed(42) # Split into train & test set X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2) from sklearn import preprocessing X_train = preprocessing.scale(X_train) X_test= preprocessing.scale(X_test) fit_and_score(models,X_train,X_test,y_train,y_test) # - X_tester=df_test X_tester gs_log_reg.best_params_ clf = LogisticRegression(C=0.01610262027560939, solver="liblinear") # Cross-validated accuracy cv_acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy") cv_acc # + # Create the parameter grid based on the results of random search param_grid = { 'bootstrap': [True], 'max_depth': [80, 90, 100, 110], 'max_features': [2, 3], 'min_samples_leaf': [1,2,3, 4, 5], 'min_samples_split': [8, 10, 12], 'n_estimators': [1000,1500] } # Create a based model rf = RandomForestClassifier() # Instantiate the grid search model grid_search = GridSearchCV(estimator = rf, param_grid = param_grid, cv = 3, n_jobs = -1, verbose = 2) grid_search.fit(X_train,y_train) # - grid_search.best_params_ glf=RandomForestClassifier(bootstrap= True, max_depth=80, max_features= 2, min_samples_leaf= 1, min_samples_split= 8, n_estimators= 1000) # Cross-validated accuracy cv_acc = cross_val_score(glf, X, y, cv=5, scoring="accuracy") cv_acc cv_acc = np.mean(cv_acc) cv_acc ideal_model=RandomForestClassifier(bootstrap= True, max_depth=80, max_features= 2, min_samples_leaf= 1, min_samples_split= 8, n_estimators= 1000) X=df_tmp.drop("Attrition",axis=1) y=df_tmp["Attrition"] X = preprocessing.scale(X) ideal_model.fit(X,y) # Cross-validated accuracy cv_acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy") cv_acc cv_acc = np.mean(cv_acc) cv_acc X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2) X_train = preprocessing.scale(X_train) X_test= preprocessing.scale(X_test) ideal_model.fit(X_train,y_train) ideal_model.score(X_test,y_test) ideal_model=RandomForestClassifier(bootstrap= True, max_depth=80, max_features= 2, min_samples_leaf= 1, min_samples_split= 8, n_estimators= 1000) X=df_tmp.drop("Attrition",axis=1) y=df_tmp["Attrition"] X = preprocessing.scale(X) ideal_model.fit(X,y) # Cross-validated accuracy cv_acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy") cv_acc cv_acc = np.mean(cv_acc) cv_acc X_tester=df_test # Make predictions with tuned model y_preds = ideal_model.predict(X_tester) # Plot ROC curve and calculate and calculate AUC metric plot_roc_curve(ideal_model, X_tester, y_preds) y_preds y.head(100) ytest = ideal_model.predict_proba(X_tester)[:,1] ytest X_tester['Id']=X_tester['Id'] + 1 X_tester X_tester.drop["Id"] # + df_test = pd.read_csv("test.csv") dfd=df_test.copy() # This will turn all of the string value into category values for label, content in df_test.items(): if pd.api.types.is_string_dtype(content): df_test[label] = content.astype("category").cat.as_ordered() # Turn categorical variables into numbers and fill missing for label, content in df_test.items(): if not 
pd.api.types.is_numeric_dtype(content): # Turn categories into numbers and add +1 df_test[label] = pd.Categorical(content).codes+1 df_test=preprocessing.scale(df_test) y_pred=ideal_model.predict_proba(df_test) #data=pd.DataFrame(y_pred) data=y_pred[:,1] #data output = pd.DataFrame({'Id': dfd.Id, 'Attrition': data}) output.to_csv('sample_submission.csv', index=False) output # - df_test
7,469
/spring1718_assignment2_v2/TensorFlow_bak.ipynb
fad9ad0e1d4df6f3f2eda118cb78d7fd9b1ce311
[]
no_license
imayuxiang/cs231n
https://github.com/imayuxiang/cs231n
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
369,421
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/GitHub-Bong/Toxic-Comment-Challenge/blob/master/0408_translated_to_various_language.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="M7uYd2oUCiMj" # # Mount, Import # + colab={"base_uri": "https://localhost:8080/"} id="85Ku_oY5CLBq" outputId="92e4edd6-711b-4e05-d28b-c93f4e717617" from google.colab import drive drive.mount('/content/drive') # + id="wDxTdEjmCpNk" import tensorflow as tf from tensorflow import keras import sys, os, re, string, csv, codecs, numpy as np, pandas as pd import matplotlib.pyplot as plt # %matplotlib inline from tensorflow.keras.preprocessing.text import Tokenizer from tensorflow.keras.preprocessing.sequence import pad_sequences from tensorflow.keras.layers import Dense, Input, LSTM, Embedding, Dropout, Activation, SpatialDropout1D, concatenate, add from tensorflow.keras.layers import Conv1D, GRU, Bidirectional, GlobalMaxPooling1D, GlobalAveragePooling1D from tensorflow.keras.models import Model, load_model from tensorflow.keras import initializers, regularizers, constraints, optimizers, layers # + [markdown] id="1iOXw8OeCwAS" # ---- # + [markdown] id="rs041I--CxFb" # # Load Data # + [markdown] id="SWSth5FNC1Er" # __현재__ # # '1-pre' : 1차 전처리 거친 데이터 길이 0인 애들은 기존 데이터로 대체 # # # '2-trans' : 1-pre 에서 길이가 0이 된 애들 번역한 데이터 # + id="i8oGsp0nCvHC" train = pd.read_csv('/content/drive/Shareddrives/SOGANG Parrot/Ho-colab/0407-add-len-translated_train.csv') # + id="2U2oeEiivEtJ" test = pd.read_csv('/content/drive/Shareddrives/SOGANG Parrot/Ho-colab/0407-add-len-translated_test.csv') # + colab={"base_uri": "https://localhost:8080/", "height": 207} id="3a4WdW9cC0JL" outputId="5ae19cb5-40a2-4798-e5b5-fc2e1ab6bcba" train.head(2) # + colab={"base_uri": "https://localhost:8080/", "height": 106} id="gD3MdynmC7KK" outputId="e133c937-4ca7-4b0d-eda5-da4b12d36c13" test.head(2) # + [markdown] id="E_qrdNNDE4bK" # # Extract Toxic Data # + colab={"base_uri": "https://localhost:8080/", "height": 190} id="4D7HbdjNC70S" outputId="1533e2ff-0b3b-45f7-80b5-44d4130d3371" toxic = train[train.iloc[:, 2:8].sum(1) != 0] toxic.head(2) # + colab={"base_uri": "https://localhost:8080/", "height": 297} id="Jb0gUj0nKCgj" outputId="e599b93a-27d0-4bd1-83fd-06985e1ced2d" toxic['3-trans'] = 'toxic' toxic.head(2) # + [markdown] id="q8dH71KfF2EE" # # Translate Toxic Comment # + [markdown] id="HwcIEtNiOyah" # 
![image.png](data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAjgAAAI7CAYAAADh4eFYAAAgAElEQVR4Aey9DWscR772fX++Rp9gQGAwMVkExiImg4XIOsJYm9te7cazrHX8YJkTT4hnl5XCZm5z1gNhhBlDGIEZg5kQJPbIhyyT7E7WYWx2DAf+D1Xd1V1d/aIeaexVt34GI830S/3rV1dVXfXSrf8j/IMABCAAAQhAAAIVI/B/KpYfsgMBCEAAAhCAAAQEg4MIIAABCEAAAhCoHAEMTuWKlAxBAAIQgAAEIIDBQQMQgAAEIAABCFSOAAanckVKhiAAAQhAAAIQwOCgAQhAAAIQgAAEKkcAg1O5IiVDEIAABCAAAQhgcNAABCAAAQhAAAKVI4DBqVyRkiEIQAACEIAABDA4aAACEIAABCAAgcoRwOBUrkjJEAQgAAEIQAAChQzOm1ev5FXw/82xmP0k/a2rcn7xF/Lp49Ex7nDS64+RJJdAAAIQgAAEIFBaAjkG55W82L0rV99bkIUF6//iL+TGFz158a8Z8vz3rlwP79GS4QyX6lNPev0M6Q2/iPJ6ffenGa7kVAhAAAIQgAAETguBDIMzku6vF+PGJjQogQH44DMZ/Fw0GyPp/fa8LCycl+v/70BmnwU66fVF4xTB4BRnxZkQgAAEIACB00ogxeC8kYM/XLbMzXm5+vvPZPtP27K9dUN+sRjNcCz+uidVm+PA4JxWqRIXBCAAAQhAoDiBpMH5uSefhrM1F+Wzwav43X54ZC03LcpnA38+5uDPK7Ky6v9vf/dGDv7sm6HzfzoQ+akvd4NjK6ttObDu+NNgWz5d+YUsLpyXy7/dluHPIvF7Sfr19j3/v7789K8X0t26qg3Y4tKKfPqnoTiRy6v/7knrtytyWS+7LcovVq7L7T8P5Kf/jQLC4EQs+A0CEIAABCBQVgIJg/Pqm9vR7M0vH0naluDhF9Hy1eIffLtiG4O7//mZLBqT9MVQJGMPzWj3RnSeOf+DlrT+I5olan0r6dfb9/xVS1q/iq4xe4YufzEMl8PefNuSyyYN9+cHLRkGe4rsfLAHp6yyJm4IQAACEDjrBBIG5+APkXlZUOYk5d9Pj29EJuhXXb1MZRsDZTDULIqe0fnzQbpBeTOQz6zlrvO/uquXwe6uWekvLEghg6MMy3vX5e6fPpNPV+zrb0hPr6EdyPZFY4CuSmvvhfz0w1Da1j4jY2bsfJjvUhDwFQQgAAEIQAACp5hAwuDYHfzVTtr8jYh828o1OIu/7sZnfuzZloXgKarnn0X3uNiSodl5/GYordCMFDU4n0ov3PBsm5ngenkTPub+6pVJSOTFn1eiGAIzZ+cfg3OKlUtoEIAABCAAgRwCCYNz8OXFRKfvXh+bwQk2GucagxSD89Pu9SidrUG4lCTyRgZbZralqMGxHz3/SbrWcpWeAVIZ+N9X8mLvkXz2++t6ZsneLK2XtDA4bjHzGQIQgAAEIFBaAgmDE9uDs5q2B+eNDP4zWga6+GVyD05i5uMog9OML4UNm/M2OAfSji1dLcj5D67LjbWkmcs1aqUtZgKHAAQgAAEInC0CCYMjsaeoFuXGbnyZKr5Z96JsB49E5RqDFIPz5undaAbn4rb1ZFXKElPK9Vkbl0WSMzh2Whe3+uFTU7FZJGZwzpbyyS0EIAABCFSaQNLgiMjBl/Z7cBblF//X3wD82e+vynnrCaTF3/fDR7FnNThxI7Ugl//jkfQHPWn/1k57PktUtpFZ+X8v/AL9X+dlhhicSgudzEEAAhCAwNkikGpwREbS/4+40TCPXpufiyvRo9UK2cwGR0RSHxNfvC23fzvnJar/bsceET//wWVt1C5eZInqbMmd3EIAAhCAwFkhkGFw/Ozrl/B9oP7EQmQ4Fpeuyt3OMFzmMaCOY3DUtT8NHsnd3/qPlF//fVsGP8XNUrHHxI/eZDz65q6s2I+l//aRvNizngZjBscUJT8hAAEIQAACpSeQa3DC3L0J/pr4LH9gM7w465f0R7dTn6LKusUxvtd/GX2u+ThGEFwCAQhAAAIQgMBbJVDM4LyNEF715baZGVq8Kq0nQ3nxwwsZdG5Hy0mLt6Xv/r2FtxEL94QABCAAAQhAoFIE/n0GR+3BefxpbNOyvRS2sHBZbn8Tf4KrUuTJDAQgAAEIQAACb43Av9XgqFy9+ltfHm19Gv6hzpXV63L7T10ZVu3PlL+1IuTGEIAABCAAAQi4BP7tBscNiM8QgAAEIAABCEDgpAQwOCclyPUQgAAEIAABCJw6AhicU1ckBAQBCEAAAhCAwEkJYHBOSpDrIQABCEAAAhA4dQQwOKeuSAgIAhCAAAQgAIGTEsDgnJQg10MAAhCAAAQgcOoIYHBOXZEQEAQgAAEIQAACJyWAwTkpQa6HAAQgAAEIQODUEcDgnLoiISAIQAACEIAABE5KAINzUoJcDwEIQAACEIDAqSOAwTl1RUJAEIAABCAAAQiclAAG56QEuR4CEIAABCAAgVNHAINz6oqEgCAAAQhAAAIQOCkBDM5JCXI9BCAAAQhAAAKnjgAG59QVCQFBAAIQgAAEIHBSAhickxLkeghAAAIQgAAETh0BDM6pKxICggAEIAABCEDgpAQwOCclyPUQgAAEIAABCJw6AhicU1ckBAQBCEAAAhCAwEkJYHBOSpDrIQABCEAAAhA4dQQwOKeuSAgIAhCAAAQgAIGTEsDgnJQg10MAAhCAAAQgcOoIYHBOXZEQEAQgAAEIQAACJyWAwTkpQa6HAAQgAAEIQODUEcDgnLoiISAIQAACEIAABE5KAINzUoJcDwEIQAACEIDAqSOAwTl1RUJAEIAABCAAAQiclAAG56QEuR4CEIAABCAAgVNHINPgTKdv5OXLn+Uf/xjzHwZoAA2gATSABtBAqTSQanBev34t//znS3n16rVMp1P+wwANoAE0gAbQABoolQYSBkfN3Chzg7HB2KEBNIAG0AAaQANl1UDC4KhlKWZuEHRZBU3caBcNoAE0gAaUBhIGR+25QRyIAw2gATSABtAAGiizBjA4rKliaNEAGkADaAANVE4DGBxEXTlRl3nEQeyMmNEAGkAD89EABgeDg8FBA2gADaABNFA5DWBwEHXlRM3oZz6jHzjCEQ2ggTJrAIODwcHgoAE0gAbQABqonAYwOIi6cqIu84iD2BkxowE0gAbmowEMDgYHg4MG0AAaQANooHIawOAg6sqJmtHPfEY/cIQjGkADZdYABgeDg8FBA2gADaABNFA5DWBwEHXlRF3mEQexM2JGA2gADcxHAxgcDA4GBw2gATSABtBA5TSAwUHUlRM1o5/5jH7gCEc0gAbKrAEMDgYHg4MG0AAaQANooHIawOAg6sqJuswjDmJnxIwG0AAamI8GMDgYHAwOGkADaAANoIHKaQCDg6grJ2pGP/MZ/cARjmgADZRZAxgcDA4GBw2gATSABtBA5TSAwUHUlRN1mUccxM6IGQ2gATQwHw1gcDA4GBw0gAbQABpAA5XTAAYHUVdO1Ix+5jP6gSMc
0QAaKLMGMDgYHAwOGkADaAANoIHKaQCDg6grJ+oyjziInREzGkADaGA+GsDgYHDOoMEZy/BJT/Z/nE8lojGC47w1MHnRl97eoUzOQvv0cizjsf//TOT3WGVKm3WcOvZuDM6zpnied+T/tc6oWGf7fUfWPE+az2hYj1PouddMVEMzKVYO04lumCav51gOQdkW1kKRxsLRy2Rv09fi/UHBfM4xf0Xi5ZwTlcuosyaetyad7+dXboP7nnjXOjIyZaPbtBnTcHSYXQ8Ppb2q2sv4/SfKBEzml6fs9N9NGuPvOrK5ei7eL9TqsvFwKGPDeR4/A+7JPuicLN9syeBYA52RdK554oVtiPu5IMOCOkpvs95C+zsP3qfoHu/G4Py4L/29fvT/6y2pe55sfGl9t9eX4fcFO9bCDUVBkZ2iAvl3Nzp+59CUQREmb6Mcgnu+TYMznY5l8LB9zIYNTf27NXpU+uU3OFOZHHSlvbtvzeAMpOl5Mtd6UaSOv6VzDh83ZNnz5Ny1pnT29mU0HsvooC+d+2tyTn1/syuH80o7aFPq97pRH6T6o90d2bjkiXepKcOZjaNraNzPBduJggYntc16G+3vvJifkvu8G4PjZvakBXPS6914+ByOmM+GwSnY+KCLUBdHmYrTdLwKBifJs0IG5/uurNc8Wb4/SJ2pGT9tavMzNzMX9Bep9/trW1Y9Tza/KTi4DtsE19C4nwu2MYUNTsr96AePbJ9OpcEZf9eV5s26XKh54p1flrVbO9IfWQWcUbCHXzekfmVdmk/HUcbH+9K9vyH192vi1S5I/WZTut9Zx0PBWvc33xW6diz7u03Z+PCC1NTI49KaNLb7MrKXbUY92bxSl53nYxlub8jyeWe6u1A6bnwj6d2uS317KNMXXdm8tuyPfC5tSDM28guuez2S/nZD1i6pKeGaXPhwQ1p79pLgUHau1H1O3jlZVr9f2ZSezd1wmU5luF2XephndW5dNh9b9zsyPTc/weewMTqU8dOWbJh4P2rIjl2uViyjvR1phPlP4e/qJSwPJ4bccphI/15d6jc7KSPLkXRv1aV+rx+NuGP5PyfL1xqyE+PtpK3yc9CW9St1aT5NaWyft7S22wfRdYl6cqcj++PouOokdTnd7kVLKwG30eNNqV/ZkWHw2Zx3qLT0ka/lo5aAj0p/8s2W1K9sSS8W00QGnyu9NGUQGzXHGZp4RuOh7ARtQe39VWlsp3eKtiEIDc6LsQweBPWtdkFWb+0kZu3CdCw9qXu5fAotUcXKPKhjtmYtHR4+aVra3pSO0ybZcfmx+PW79r5f13S9d2K2GaT+butb1fFrm9I5SGrtyPoUpGuXv4pr4343ob+0OIZ/XBKvtin9l3GtRudOZHB/SbylVkKfx9HDNGxTrPYpZOcYx6y2Yeq3j1Eb5xoa97OfN5uR7stUHbXzbQzOi7EMHzZkVfVTqmxutuJ9nhOX1kde+xvmL4vx2fj+1Bmc0dfr2iioAu486Uv/SUe2PlLmpCE9s1ZqNRSmUvhTnsvSeHwYmZuXA2le8qT24Ybs7KrlsK7s/KYuNW9Zms+SFdvcS/8MrvXOr0mz08u4diTdT5Qgl2XjQUd6e33pdbZkteZJ7VYvGp2YCnZtTc6tNqTV6UqnEzTUx44xqFA3G9J434rx5rI2MOtf25X5UDoqzjAvvXAqWI2i/I19Y9nf60v3Xl08b0N29JLiUEaxTiiqFOODvvSdpcZoibFIetG9YtwDVvVP1mU5jNeUW03WO1b5Tqdy2FF6OSdr9w3/pqyd96T2iWVEXL24n1VjUKAcJt+ovTur0v6rE3swCmw8NsY5K//J+GN5nx5K+4ontbuWUdIN1UT6d2uxBn/yTI1ya1L/zY50zXT7h6qerEvnRRRfomMOGj53pk6fd2VN1i5dkPU7benutqXn5tNqNAulH3COj44H0lQDF68W30M37knD88Qw1PF83JDGh3XZ2FZLCz3p3F3124bPh5GRtGIyLP281WX9k2V/CUS1I2o54qR8cvfgBG3B+TVp6bamL93PVby1aHYg4NG4sykX1NKMHZejK7vcJt8Ppb+3IxueJ+Eyy4HRWlTWJv+pPyfDoC3clLZKd68n7d+ptiKu50L1aTqVzPK/1JSB3YEnyifQeLh3JSP+Z02peUvSGvrHT6KHXIPzvClLthbT2gadB8cITV1D436eyuR5UEdvt3Xf0H/SloZaElttR4OkwOBs3lkL63Kv47dhMRPoxJXf/mYwTZRF9c87ZQZnLPtfN6XxwHS6QQG89Bu/sNN2CjvV3EyDTiFR4SYyuBcfHSQbhOxr+3esjkaPiBrScszS5ElDPG9dumb2I4g31ulqsWWnc3SMQYVSGxGtDm06PfQ3v30cdfCHD1f9DYux86Yy2m1os2caEcXB7fiSbKxK4ZSDOXeW9Mw14c/gnrHKHbAa3F/2ja6ZEdDGIsU0vPA3oW/sBp2AG6f7uahWJn3Zqnmy+tAxWYpvbUv6gRnU+XeMhsrf4V/U5tcN6RqjntLg+NdG99Jc3HSDz0v3nHryeigtZegtc213lCHjlHLW5xUx/irmwun7Gq3ZndmwJUvemqxd82Tpj8NwMOKbx4iNH8+SbMVms4K6623l7hHzNexJ7Y5jFI2J/V00+JiJT57BCTS39dSqH9OJDB9uSdvM3BltX4vqpi6Tl33ZrMV5JONyO1g7nQK/P91KaQNG0nvQima0i9anoPyjwVGQ/suBbC15svQgKldbc/7vBfMRsDLm+CR6MAYnNIfBftDew02p187J2rZlmBNtg2Hrxu0aGvfzVAb3PPGsdljn//uetB5YM13a4HiSqMvuoCktrrTvUtqUZBmYPFX/5ykzOFnA92XnirVj3SrY8Tdbsuw5MzdBI7yZtbb617bUrdFBQgCTvqhrU9dsizxldLCjN1GHU/xBvKayhukF6SS+V/EfFaMZQVidmbmv35GajcJ5I6ahtFSDZHU0Jzc4s6VnYg5/BqzsmNxjhtfhV3XxrlijobByT6R/xxPPsLH0ou/lfp6hHAb3a06a/hMv0ayLn//6V3ETpNMN0jGzFGG+wrinYhpjk0d1zmRvS2rWSNs3A/EnbMy9xo+VuW6Ey0LJjtKvY2456/NSWSbr5Czp6zKylhr051s9Gaonnaz0NFfLQOh4rOtM/qamo855QsrPWzT6D68Njd1maEZn4mPFNzVLCyaOUVfWVZvhmF87bVO2yfIPOsd70VN9ybjcDjZZLrG0bE2p35/7MyJbe9kzP0Xrk1/+EUM7Xbe87WP69x+7eiYqtW2NxRzP70n0YLgnn6LypPbRprQzlhHjscfjmZr2NzTvSYMz/FwNhrekbwZksfwF5ad1lKbVIL2/BO2I22ape6V9l5bGGf7uVBocNSXb6+xI615D1q7U/T0r6jFzI6agYDfuKnNjdWR2QQbnnLsUrFnrPSXB78HaZWhA7OvU7zMJZyKjYU862y3ZurUm9Sv+WrmqTOH9s+53khgTFSxq8OKdl1sxo/NUBdajjE+64R6N+LXxc+MVPovTbOll3TPZCahY4vf
WjZ7aV2WXbfB7bJ+Tyz/jcyGt6BmIerRMlTCifozhXolYbL428hv3sXRveuKFsw+BWbvZDZc8c8tIzyTUpPncL7tkR+l/794j67xE+YQmwRhoRyNO+nGj7ncCOv96hGpMmm+07ZmxzHhcY+HW3TC+yOTF8uBcn5XOkXyc+0yn/t4itQ9P7xW615bu8FBir1BwdRfGHnSOloFKxhXXfixP4X2csoh9fyjdm/4j2ecubcjmdkf6B+PYUp9Os0B98tmYfXpO+6r3kGRoQ8dTMB8BK2P0kzyCvCbKIYVBcK9EvXt5KP1gGTFsbzLLyI3bNTTu56neG7mh9lvqPTWbstPpy/6PztaIzPj99Nw+L+xTFMvMWFMYxLRwdo6fMoMTNRK6Ej5oyU6nJ4ODobTtdw4EBevVVmXrrtqDkbJMEZyzeqslrQfp/zP3GBQVjpryVuv6gYBbD3ak82Qg+8N2/D09Wfc7SYzzMjjO+z3chj23IU3Nl9sQxCtTZkNlKmBwT9OwxdP37206Qn2vpXXZyijf1sNgn5MbZ8bnYlqJz3olR6x+jEufbGXqrv0sexSt8hubhQlmfcLltrADz+hEnLxl8XbLOeu8OH+/LN1rY+c46U+nPi89o6X32Rhz6M906Y7FNY3KeDu6DNPI7BAinfnxpc8w+DMv0b6TrHTcPCbOy4hjfNCTthqYBYMote9tZxh0agk2JuZ3YXBUWhMZPevIzp0NqevN+2p/4qb0gqVrnccC9clnsyqNrHr3oBftMTH1Ovzpl3tk4A0D56feg2O0cjI9GCOQMDg6JjWAsGZlM8vIbddcQ+N+DvIzGcmgsyObN81AvSb12xafDB2ZwRwGx9FFqKNi358ug2Omec20XJiZQ+l8nJzBaTxRDYfaca82yy1L87nljoNNi+kd5RFwnA2PYeMaxuNfrzZE65dxOXtbpsF6fOi2syrNSWIsbHDiHXI8LwFXa2rcbdjj5zvcUvM1W3qJ+wf3TF3iCXiZvVj6aQxrmSNxL1Nebpzu5xnLQe+T0csnVudt0rI79PA7h9tR3wd7HFTn75udeGcdM0DuvcwSjt1pufsAUkxSogN372t9niV9VSa6nNS+E7VJ21p20t/f6ctQLTVa36trMuPJ7BAixr6Go87R1oUfe7Q/TqdzHD4F4piO92VHDczMplJXdyHTd2VwIkaaiXoyZyna1F60PqVp0mZ81O/+PjProZGQg4kvMB2WJk6ih3yD4+w7DMoovpdKxXVMg+PkTT0RpzY1b+0FfVWmjvz0MDhGE8f7eboMTlYDEHyfXdjBUyuxDcX+VL+94TKseAc92ekMMp8QUi9V0ssE7mZAvVF0Q+of+Y/XZpkB//sCS1RBOseLMWPEkNJ5DR+oTdVbyacbgk2FYWULr413qCE3p7KahsM1kbOkl7i3KetL0SOi5hx/b1E0+p7qkf9y+KSFOU+/FOurtnTNkyaurtzPs5aD5rYkra/UhlkrnoCPzn9K/NMfB9J+2C3wJyKCzec329K+FXVAYf7MPobEQCDoGEyHmrmxOdiI7kWzQJkdiFvm6vMM6euY9Wh8Qxq3arH9Xmo2pVZbk7XV+D4wdU1mPJkdQtQAmvq3bO0t89kF+Z4HHyeOyfd9ad/rhI81m7LSM3yGc0J3JubiBifV+KeVkfPdeNiR1nY/XOb04wuWP83SWNH6pMu/Fj7xZvKqfu4/3pHO01Fs6cs+rn//sScN9aSp/aSjFe/hbkObgHDZ6IR6MO1UoRmc6UC2vORG6cnTLVmK7ct021/381iGnZbsuHueghnZMBZHRxGr4gbHbX/VPSaF30ZvNFjNn6fL4ASj39onOzJ44f9tktGwI40Pl2V5KTmDE86QqMphnpCwK82wpffoLP+uHd7v8FnwqF6KeYnENRXVear9PdG1Ixl2/Ldvrn4VbPzSDUJN1rcHcqheoz72z6lfWtaVIYwvs2FLS2csxWJ0K1Qk0ITxMg3KR03pB1wPn+2IWh9ONDJBnlY/V4/m7jsNYpSGz8qfwah91NSPKod/22mW9KyGTd8zYLX2ybrUw3KL2C/HHhEeS++WejR6VZpP/Lehjkf70tPr6tYTOC5/97OKIVHeeeUQGGC1/8t0DnY+rPz3Dkb6z1mMDnrSVK87SDOa9rXmd10Oav0+bQOimhXxZy0bnaF+C6zKt3rEP/EKhOBpDLUUoR8PftKR5rULsrqqXgdwTIOjZ2UKpq/zk/FoeNDYp+Xx5AZnTdY/qUvjYVA3R0Pp6MeinZne4/JxOyajn7u9oC0Yy+FeU78yYsm8iiFNd5pPEYOjtK7eutvQ5Th4Ec1W++/J2cp5r4y/7KmX8r8K9BK0VaqNCzvbacH6ZJe/4Ts+lMFDv31cSxhvt90wj5lHr/AYDIcyCF/j4emXANp/l+okejAGx32Kqr/bls1r/nvBIjMVDC7UqyceqDawL93tDal/vKpfCBixcttf97NhuS7tod8GjEMNmr1nU0lsVjf1P5gxyh7UK6bp7a+eIfNqUqQcYn1emHayvMp63ikzOOoV5R3Z1PtaVOOuXvSn1rD343/3I6Oh8N/NEK8cifupd4fcdl62lFGw4+c7sq43zQWx1C7I+rb9d1Imst9RjxoGx/Wrx3dkeOD8rayMeI1ojhejW6EiUSYMjsrfjwNp6cpsYj0nq3e6cmi/kFBzmMhw239duuelT/ObuNXPyXBHv3dGlVVsdFk4vShufd+QlR2Hilk1OGkveVMvczPxBnlT7yLJezIiTCOe9izlMN7d0Pq098bYXJK81WvpZ/m7N37j5S7dRGkktVd7f12/TDI6x8/f+JumrOrNjp54tbpsdvZlX/+9puMbHLU07Go/K311rn6qzXNnBoNOuxbFYWI/SYcW6v/lUHZszbu6COr9sfi4BkftnXraCuuC/8SOU8cydBc+kWOZ5dT8v+hFbWO4CT2Y7bKWcwzD+M9keakXn8bbM6WXAvUpaCfc8jfaso1JPIZ4fZuO+tJa918q6fNSG7TXU19UmspDxZFSDok0A+4mjein/7LD9nNnT9zLfWn/JorLr7fHWKJ6uS+d22rQYdpcP3/qha9hjJnxF5jByWh//XfJpc+whekG2q/653+PwSkAV/9huXF8l/+JCsP8xdqMF9fl3fvIWF77f/Ss+B+pdCq64XGCGPPijx0L0og93WHSj/2cyDj3hV3xPGROiRZOL36/MGb9WP44/jRKLE5z3ZzKQN17nuUQxD8LSz/vOY+bx/JfPN+ZZRS7n+FZ9Gfx9MMyPVF6ReOKzvPrbzTjkRXHvPiE7UVi8BDFlBVDoe9f2u2iesGgJ561jy7/Hqa87HukxWXOK8LthH8J3LSfM7Q3+XlMy88xvlNxHaO/SMRm2oC3uGyU0O7b0t47rrsJljOmf2oNzkkzxvXHqNAzigfGb4+xv+af3N8D87fHvJxsh/rN0Gn7MMqZH8qXcpufBjA4dOrRdCks/u0sxk/U32/y/w5b4k2xlM+/vXxOXeejl18y3vmDXtDLGdcABueMC+DUNdhnujwmsv+1/86mnd2jNnjPb5SDBk
rMUi/vHL2MRBmXuIzPdJt4snLD4CAeRjloAA2gATSABiqnAQwOoq6cqBmtnmzUAz/4oQE0UAUNYHAwOBgcNIAG0AAaQAOV0wAGB1FXTtRVGHmQB0bQaAANoIGTaQCDg8HB4KABNIAG0AAaqJwGMDiIunKiZtRzslEP/OCHBtBAFTSAwcHgYHDQABpAA2gADVROAxgcRF05UVdh5EEeGEGjATSABk6mAQwOBgeDgwbQABpAA2igchrA4CDqyomaUc/JRj3wgx8aQANV0AAGB4ODwUEDaAANoAE0UDkNYHAQdeVEXYWRB3lgBI0G0AAaOJkGMDgYHAwOGkADaAANoIHKaQCDg9+U2kYAACAASURBVKgrJ2pGPScb9cAPfmgADVRBAxgcDA4GBw2gATSABtBA5TSAwUHUlRN1FUYe5IERNBpAA2jgZBrA4GBwMDhoAA2gATSABiqnAQwOoq6cqBn1nGzUAz/4oQE0UAUNYHAwOBgcNIAG0AAaQAOV0wAGB1FXTtRVGHmQB0bQaAANoIGTaSBhcF6+/FlevXpNp4fxQQNoAA2gATSABkqrgYTBmU7fyD//+bK0GcLxnszxwg9+aAANoAE0UAUNJAyOiMjr16+1yWEmB5FXQeTkAR2jATSABs6eBlINjjI5aiZHLVf94x9j/sMADaABNIAG0AAaKJUGMg2OMjn8gwAEIAABCEAAAmUkgMEpY6kRMwQgAAEIQAACuQQwOLl4OAgBCEAAAhCAQBkJYHDKWGrEDAEIQAACEIBALgEMTi4eDkIAAhCAAAQgUEYCGJwylhoxQwACEIAABCCQSwCDk4uHgxCAAAQgAAEIlJEABqeMpUbMEIAABCAAAQjkEsDg5OLhIAQgAAEIQAACZSSAwSljqREzBCAAAQhAAAK5BDA4uXg4CAEIQAACEIBAGQlgcMpYasQMAQhAAAIQgEAuAQxOLh4OQgACEIAABCBQRgIYnDKWGjFDAAIQgAAEIJBLAIOTi4eDEIAABCAAAQiUkQAGp4ylRswQgAAEIAABCOQSwODk4uEgBCAAAQhAAAJlJIDBKWOpETMEIAABCEAAArkEMDi5eDgIAQhAAAIQgEAZCWBwylhqxAwBCEAAAhCAQC4BDE4uHg5CAAIQgAAEIFBGAhicMpYaMUMAAhCAAAQgkEsAg5OLh4MQgAAEIAABCJSRAAanjKVGzBCAAAQgAAEI5BLA4OTi4SAEIAABCEAAAmUkgMEpY6kRMwQgAAEIQAACuQQwOLl4OAgBCEAAAhCAQBkJYHDKWGrEDAEIQAACEIBALgEMTi4eDkIAAhCAAAQgUEYCGJwylhoxQwACEIAABCCQSwCDk4uHgxCAAAQgAAEIlJEABqeMpUbMEIAABCAAAQjkEkgYnJ92r8vCwkL+/y+G+qbDL8x5i3Lj8U+JhKLjC3J9N3k8cYH9xQ9dufHeVdk+eGN/e+TvdprxfJyXy79tSfdbN46htI7K78J16f79yKTjJxwr/llisc5dvCG9RHzW8ePEH88NnyAAAQhAAAKlIjAng7MgCymdrG02ZjI4bw5keyUwT4uzmRw7zbjBMWZsQS7/50BehcVkG4HonPi1MxqcY8c/Syzxcxd/3ZO4dbOPzxh/yIZfIAABCEAAAuUkkG9wGm0ZDAbJ///j2wPXTCz+uisji4N9fCaDo+7xr6G0Ppjd5NhpXvz1Z7L9p23Z/tNd+fSD87FZqav/ZSK1jcCKfPY4Jb+DA/lptomkY8Y/Syz2uYrTotzYNXlSAO3jGBxLlvwKAQhAAAJngEC+wQmWorI42GbCn/GId7L28ZkNjkr0GCYnL81Xe3flYrgc9an0flaJvEUjMHP8s8Rin2uM4A3p/mBKyz6OwTFU+AkBCEAAAmeDwJwNjr9UZTrZPLNRGO+MJiE/zVfS+220DHX7GzUT9ZaNwEzxzxKLfW6Up2gWzT6OwSmsN06EAAQgAIFKEMg3OO9dlpXVFef/XekHmz1sM7Hyy6uyaGZHfuUvVdnHs2dwDqSdSMNJ015eWlyR1rfZ60VHpXnw5cVwqerilweOwVmUX6w4aavY/qzOy/o3z/htU3JULPa5K3L1l4thvq5/rZaq7OMYnKzS43sIQAACEKgmgXyDYwxL7GfUWcbNxAsZNiPzoDrZ+PH4FtgIp90RRzMR8U2+zvfv3ZXBv6I72L8dlWbsKTG9BFcg/dylugLXx/gtyEJm/AXuFcZin3tduv8zlM8uGk7XpfuDczzxlJVNjd8hAAEIQAAC1SKQb3BmmMHRMzT2cszCZblsNgkv5D0mPpK+3gisNgOn/281VsLZCfW0llkCSyuKowyOPYOzojca20bgqFmTtBTnGf8ssdjn+qbzzbctuWzM1AeXo995TDyt4PgOAhCAAAQqTCDf4ISzBekE0sxErJM1nW2uwUm/d/jtD325bYzSEeZGXZMWU3gvGcmjX5pZjgW5+1QtdSWNQnT+HH6bKf5ZYkk7940Mv7gcmcGQfzTrNocccQsIQAACEIDAqScwd4Mj8kYO/pDsZLP34OQwmskc+PfJMzijr29E+4QWP5OB3sqTZhRyYprl0MzxzxJLxrlvDqLH6zE4s5QW50IAAhCAQIUI5BucX95OXzb6xn/fSraZOJBtM+sSdLIzG5yZzYFfKnZMi0vWhuGlaBNu/J0xtlG4KDf+M22Z7JEMozcDFiv+Y8U/Syz2uc4MzcG2tTylZqyc48VywFkQgAAEIACB0hLINzjhDEC0rKM3/wZLV7aZSBiYvz2Sq9b1ieN5yN6M5NGvgjQLLEvZt7JjSt+ovChXvzyQ6Dks2yg4+Qzjn9EgHDv+WWKxz03GN/qvq9ZSVfK4zYzfIQABCEAAAlUj8PYMjojYnexMBkdRVjMgK/kbitMKI8vgqNmcT7ceSf9v7lSMbRTmZHCOHf8ssdjnphkYe79R2vE0enwHAQhAAAIQqAaBhMGpRrbIBQQgAAEIQAACZ5kABucslz55hwAEIAABCFSUAAanogVLtiAAAQhAAAJnmQAG5yyXPnmHAAQgAAEIVJQABqeiBUu2IAABCEAAAmeZAAbnLJc+eYcABCAAAQhUlAAGp6IFS7YgAAEIQAACZ5kABucslz55hwAEIAABCFSUAAanogVLtiAAAQhAAAJnmQAG5yyXPnmHAAQgAAEIVJQABqeiBUu2IAABCEAAAmeZAAbnLJc+eYcABCAAAQhUlAAGp6IFS7YgAAEIQAACZ5kABucslz55hwAEIAABCFSUAAanogVLtiAAAQhAAAJnmQAG5yyXPnmHAAQgAAEIVJQABqeiBUu2IAABCEAAAmeZAAbnLJc+eYcABCAAAQhUlAAGp6IFS7YgAAEIQAACZ5kABucslz55hwAEIAABCFSUAAanogVLtiAAAQhAAAJnmQAG5yyXPnmHAAQgAAEIVJQABqeiBUu2IAABCEAAAmeZAAbnLJc+eYcABCAAAQhUlAAGp6IFS7YgAAEIQAACZ5kABucslz55hwAEIAABCFSUAAanogVLtiAAAQhAAAJnmQAG5yyXPnmHAAQgAAEIVJQAB
qeiBUu2IAABCEAAAmeZAAbnLJc+eYcABCAAAQhUlAAGp6IFS7YgAAEIQAACZ5kABucslz55hwAEIAABCFSUQKbBmU7fyMuXP8s//jHmPwzQABpAA2gADaCBUmkg1eC8fv1a/vnPl/Lq1WuZTqf8hwEaQANoAA2gATRQKg0kDI6auVHmBmODsUMDaAANoAE0gAbKqoGEwVHLUszcIOiyCpq40S4aQANoAA0oDSQMjtpzgzgQBxpAA2gADaABNFBmDWBwWFPF0KIBNIAG0AAaqJwGMDiIunKiLvOIg9gZMaMBNIAG5qMBDA4GB4ODBtAAGkADaKByGsDgIOrKiZrRz3xGP3CEIxpAA2XWAAYHg4PBQQNoAA2gATRQOQ1gcBB15URd5hEHsTNiRgNoAA3MRwMYHAwOBgcNoAE0gAbQQOU0gMFB1JUTNaOf+Yx+4AhHNIAGyqwBDA4GB4ODBtAAGkADaKByGsDgIOrKibrMIw5iZ8SMBtAAGpiPBjA4GBwMDhpAA2gADaCBymkAg4OoKydqRj/zGf3AEY5oAA2UWQMYHAwOBgcNoAE0gAbQQOU0gMFB1JUTdZlHHMTOiBkNoAE0MB8NYHAwOBgcNIAG0AAaQAOV0wAGB1FXTtSMfuYz+oEjHNEAGiizBjA4GBwMDhpAA2gADaCBymkAg4OoKyfqMo84iJ0RMxpAA2hgPhrA4GBwMDhoAA2gATSABiqnAQwOoq6cqBn9zGf0A0c4ogE0UGYNYHAwOBgcNIAG0AAaQAOV0wAGB1FXTtRlHnEQOyNmNIAG0MB8NIDBweBgcNAAGkADaAANVE4DGJx3JOrJi560H2zK5p2WtJ+NKyckRhzzGXHAEY5nWgOTsYzH/v/Ja7SQpYXx8570vqMfyeJjvn9nBmdw3xPPS/6vvb8qm519mbwjo2Ey/k5/Dluy7HlSe78u9Str0v5uloo7ks41T7z7g3dvinRjMymY7kQ3THNvlGaKYRauRc99S/mqst7JW8E6U1SDZ+C8UV9a6xekFusjzsna/Z4cztXoDKQZS8P0STW58NGmdA6KtnfxMtH927WOjIz2nzXF89ak8338vNx+5/uOrHmeNJ8dcc2kL5s6D00ZmPSmU5koYzg54lrr/NxYKnLeOzY4G7Kz15d++L8nnburUvNqsv71qLKNwvCPS+LV4mIsLq5/n8EZddbE8wrGXbRyzlhxZophxnsXKoO3lK9Cab+N/HDPyrYzZdXUZNiS1ZontQ83ZGd3KIeqo34xlO72htT1900ZvJxXxx0YnN/sWP1QX/pPOrL1UU282rp0ZzElQX16pwZnOpXx07a0n9ozOH6+1jrV7UePo+93bHDSOsuJ9O/UxLvSlsOKNr4J8c+UTwxOYZM1E9eCDSYGB0PwNnTFPX1dTYbSvORJ7ZNO+kzNi46s1zxZuj+Y0yx/YHDSZsRf9mWz5kn9q8OZNZ9o49/mDE6qdjA4aQboFBicqWSN0kd7O9K4tiznPE/OXVqTxnZfRgWnKwtdO+rLzq01WT7viVe7IPWbTek665rD7brUb/fk8EVXNj/yp1DDKcTXI+lvN2Tt0jnxvHOyfK0hO3uWgx71ZPNKPbr/FbVEtSm90VSmwbGd525HO5SdK3XZfGzuU8TgjKR3uy717aFMVZwhsw1p7qYs/8XirsmFDzekZcc99WOov1/z82XHnVK5NKMPfTbnLqk82vFPZXpkei4D9Tkvhii/4+c7sqH5B9PBz3cixnasabzT4rJGRUfmy77/QVvWr9Sl+TRlivt5S+pX1qV9EOXT1qdautx44Go7yqNbcY0mzXS4+ZyqUTtG6/f89CfSv1eX+t2ejK1rppOBNJUWPnc6m++70rhSl61vVN6tuIto0b7/rNemlV9Mx/F6dri7GdVVVddTliOSXAZxBireWLop9X46FVMmo/FQdm7W5YKaiXh/VRrbKfcb70v3/oaE9e1a+lLJ+LuuNMN71WXjflf2x5GmXJ3kfo7lIWgDLO2H19qxZbSR+tyi58XKeyrjxw3xvFVp/zU7H37/sCHdH4NzTB3/fixDNcuj2qnaBVm9tSMDc46TTpifaY7BmbptraVl535h+QbfFzI4NiPdXzjlbA2mDp80g3YtWDqL9UvxuEaPN6V+xe8n/W0QQV/gxBwxyGZdtXNOgcEZSfemJ569fjmdymFnXWqeWoPtSG+vL71OU9bOB07/iILzr61J/Tc70s24dvK86e+L+WhT2rtq2awrOzeXxfOWpTWMBKCFe2VN1i5dkPU7benutqWnK+OhdD6piXd+TZqdnvT3etK5vybn1HJbJxgBTEYy3OvLzm888a5s6Vj6e0MZqXVSS8xxUblO3K10UWzRdcE5NxvSeN/EY/LjLv9lxe3JcjhKGsv+Xl+69+rieWZZMYg7hf34oC/9r7ek7nmy8aW/BDn83nT0RdJLy1NeDEF+r63J2vlVaTzoSLfTkYFq7LNGTgneI+kG5dfS5d+X7uf+cumm7qinkp8vN+ZDaV/xpHa374w0J9K/WxNvqSXDgF2Wtr1L9lR8drm7jWm2Rt0Y/c9F0vc7lU3p22v6z5r+HglnudXvpBrS053tLFp045vl2oK6Csp941ZDLlxrSudJX/q7yhSrQU18OWL0ddDmPOj6yxe7TX/p5I5dplnpWvV+OhVdJh83pPFhXTa21f3Mcrwny58PI42YGYwPN6WtYtvrSft3qh2Kd/qTZ6q9ito0nYcPaxLXjMsz6/PR2tdty8uBP7uil46CNvI3dal5y9J8Zur3VKZFz0u0HWr2Ptn2R+1aEH9Qho3HwZJMUMcbt+pWG7+ly0rxGNqaTaSZY3BGXdnwPImWeWasg3Yf5rZDRcrZ5PPOZlyrqpxjeojHNfl+KP29HR17/V6g3QN7+SpLB9X//h0bHNNZ+p2gMhWta+ek9uGm9F5YsP/allXbKBiRvvA3YW3s5hRe1rX6e09MJdEjtTtdZ1lsX3ZUJ/X5MJyi1A2VW6GVAXu4qhvIjh23+v4vat+KNdowjZ0tfpWfRIdr8n8Cg6M2tcXiOfQ3KH/cCfOp406cN5XRbkM3XLa5y5pZSzRAOfmZJb20+6bHEFTw2rqT3xkMTqClraeGu/o5keHDLWnbswCZ5WRf5//ua2IrbgomfdmqebL6MDC9gQ7X/uJMg4+60qh5svxHo714I2azSTU4KRq1rwl/L5q+zncttuFR7yVTptJbsgYBQSd1sxvMdARxJzSW1GIYk6nfZgRd4NrCugrKTw2gYkvgQflH5XAonY898e7FN/NPnrdl62E0u+aXcVJ3br33240l2YrN6E1kcG9JPG8r2hz6dMvfjBqrtyPpPWhFs8mBhqIBSKC9lwPZWvJk6YHRTFKTScZTmRbSfmDMY6Zb3T/IQ2jYi56XFlu2xuNx+21iuHSkzYMnS/fiM4mTp1uy5HkSr9NuuoHBcffg7LZk7XzNn6139ZiynJVaB+023jU4Rco5S6vB0tlSbtvg9htuvs/m53dscMyOdfvnsmw86Mq+tYns8Kt6xp6coDG95Uydh4Kcir7WGWGaynL0
LvOJ9H4XH1FoISf2B/mj9bDCWelPgx3uxkiptN3KoOPJ7DhdoRZpBIJzUrj4Da/Z+xTMMqRUWLUk1FKNZViJspcODc/Yz9T8zJZe7H4B01yDExtZBxXYbVhM2bjxjbqyrkZrxniY89yf7nXucftzcK6ZAVL5mextSc0afeXpUxuIsOPILndXT+kaTW/Qiqfvl13UefqfG4+H2jRH2h9Is5Yy6j1Si2nxvQUdB2Vi10dfZ4GhCeuCmtVQdb+dvhdEl7PPIMq7lQen3usyCcvSOs90dGYj63M1K7YkW3vZg7bJN5viec5sWqA7XZ5p6di6dH8vov0gP7aWw/r517bUjcktep4bg/48lJbaHhCWgcUpdr5TF3Qdt022uW4gW7EZGPO9/TMwOGlPUl3akFZsSd9J14optQ7mGZwC5WwGvUmtBnGE5jstLrffsPN8dn9/xwbHdLQG+ETGw4401HTxpWgKX4tHrffqvR/+ng7zu94vYwvJEp2qgK7wwkrpnKe//3FfBrttaT3YlA2Vlt5zkmJwEun5YgrXO2Nx+muh0TRnRkyZHacr1DQxG37mZ/Y5cYPg3ttc7/8c3PPE+6QbPuoYvzZ+boJran5mSy9xz2mWycrOb/ElqokMPlfT7cHeiHtt6Q4PJfGYe2q+sliM/eXW0Hi5sxsZWjDa1J3funTVHi0zm5HS+Lsadz+ncTTf5Z4bSz8YLJjOU3Pw9znpWQxTJ4YtWbIMXF7cR+spu1zj186gq8zyS6alloHUUzv+fo4taasneqyB1zTYv1Gk3mdyThjwQ+neVHv41D7DDdnc7kj/YBwtYYV14Jwsx9qZoF3UbZbbrmbp03xfQPsBN7OnzrS/+mew307vRSx6ntF47GeyDIxO4z/98g6NZYKhyVe+Lvx7+uckTNVkLMNOQ29bOPYsqqkTKo+JGI8uZ2Nwwj2eIauAU3j/NG5F8m44nZ2f/2aDE4AOps3NaEE3DkvrsvWgJa20/w9TNuoFYshsWEKx+Gkefr2hNy/rjX/3WtJ62JX+cCR91cmHQsrqkHwxLX2ylR7fg/jL/FJjymx4XaGmidkVaPY5M3UM6l1FVt7j17ppOp9T8+PmJX5NKhennNJjyM5vsmEJ0kyNT+2z6Un7XkPWgkZb7anaGVr7CzKuizfAUb5i+1GC0a29pJqb51ijmJ1H9x7u56zY1Pe558bSn8pUm5e63gCq82VmMvUI3t9zo2edzPe67LLjTi/LiF1xczSDrjLLLyPOH/el93BLGtf8jcHq4YG1bbNnpni9z+TsMtbMJjJ61pGdOxtS1xvm1SPT0bK9z03tNctoDx/04stvTh3K0kOu9gNuq7ey0mz5exGLnpcaUzAAiOnH1kO87pr+IbOOBwbUHlwm8+6XYcLgBPEdfrVqzZZlaCSlHiXK+xjljMFJKftU3RQ/73QYHEeYyUazeIbi0/xZ1w31tLq7hqs7gEIGx1/OCUcURxRCQvzq/KBhSK4Xu413diWLKm/2OfFOJbkMFd0juQchfm0Wy+D71I5ktvSiWKK00mPIzm/Y+MX2NES8k6OjKK3peF921EsVV61XFqTmy7rGLftgv4SaZvbNTnxpIU+f/nKi2Z8R5DGclo7SdPXkfk7jaL4rnr5Kzy+/tc6h3gwaLV/6329+M9Qbq6Pv1TXZZZNellG+il87g64yyy87TsNqOh3L/pdqT53Z8OunW6TeZ5ZJasdnMwie/FqKNqyn6SiK0bnW1WPRz672xz1peJ6EpiLrPkXPy7jeX8KNP9jh5s03Hda+xkyGbtuZxibf4ITth15CPEEdzIzRikk92WmVs+kTkm1UEEc4+EzTbpG8W2lnlIfLvuyfT4fBcWZw/JFjmujHMviqLd28HeJ61Olu7lM7/fv6EddNvRM/QwzqMdil+CxGVkM1fLAUW1YLhfDjQNoPu7JvPa6Yfg9/vTja4+CLz2yUi0YhaWJ2hZp9jtup6LiXtpIvztJlUJOtvWj2wr823kGH+XQrSNCRuA3iLOml3Ts9huz8mg2U9qyJuq9vHqK3hE6+70v7Xid8ssmkrfc02C83zMiXOT/5M9h0ebMt7VtRJxWel6XP6aG0V+Pn6yVDs0RkeAcbS+2ZtnR9uRoJPs+Qvop5cL8mnno6rxbf86C+r328JqtmL4aJ750YnKkU1lVRgzMZSf/hlnSsJyh1menZqkg3Ret9Zpk4HZ9aom9t951H0YOZDdOh/aie7qmFD0iEWppOZf/xjnSejmJLWvbxtN+Lad9fbq2l7KWaHvRkpzPwnwadFj0vQ4/KROstCvYThNG5+iWAnr35Pm35x5yf0a6H2lTn5Ruc+AzOVI5dB49Tzkdp1eghtY75+Uo13y/HyaX3GBPDr3o/37HBcZ+i6knngf+2SnsPjho59W6pdxusSvPJvozUmy1H+9LTj/GmmJdYYaVc+6IvTf2Wyk3p6zX1YK/EpS3pHoz8v32iz6nLsqpsoZBypvR/7OmnXmofNaUX3GN00PPTcQxEemMXdIRqCjx4LFW/ufPjVVmNbZTL6czDfGef4xqcqRV3/4X/N18On+3IRtoj+LozrMnq5+rRw32nEXYrgz+6VTzUo/mhwZslvTA/1r1TY8jO7zQwCl6tLpsPo8f3L6yu6sfYw9FR8Oczlu/2/DenjsdyuOc/Fhx/qVhGvtJiNd/pmNVG+rgp8DsbS597h4H2BrKj9mE4T4X5o1tPzl1r+a8YUI83f7gmq6sFNWriif0snr6ON+PRcH/jqxd7/N3PX3bZJLQYi0uV+QzXFtXVUZ1GuMfJdLRb0gvqxdi0G0tNGZhHj6108+p9ep1Pds5qdka/yf2rod/OjUfhXpBokDOV4R/9V1g0Hg4CvR7K4KG/ZyR6Eix674//TiKrHtmsi2rfnPe7tgzCtqLt75m0n0orep4dg/178DK/8JUbw6EMw9dupLwaxDEPkYmbweC4T1E96UhLPwIfN1PHroNOjIXK+Sithv1SWj1R9VrtZW3o1w0MXgQD1WDyoGY9SRvxytCHXTYl//0dGxz76Sn/d/8lZz1nM58CP5bBA/VeGeua82vSSnsZlVsIrw+le2c1du251U3p2ssWPw70I+rh38dSHeLjw8QehcyGSqXp3kNtFLzWSrxsKvMeL/el/Zvo76/417qVNE3MrjCzz0ntVBJxn5NV9ch84iWKExlumzLw92LkVY7JcEe/q0gxjY0kCqfn5kt9ToshO786vh/70lz1N2566t0htzuyf5D8Oy/jp+rRUEtfXjqHzHy5ugs/+6bIfvdNnFtS2wl96ntNZP/hhn5JnNZpoH9XT+7neFppTIumPxXzVKAXbpwO7hcsTdRCg2DSyS6bVC2GzNT1M15bRFdHdRp2/In7eZJaLmnnOfU+s0ycjk/pe7+z6W9uNm1d7YKsbw+dAUXaefXk3/HT908z1qZ8/J+FtX/QkU39HhZTT4L6FNt8PZVJwfMytTnel47TZnvqHVdpL0ZMMDR5c9tO87390z8nbPcNcy942eGTQ2c27Jh1MBFjWvk55XyUVnMNjnr8vxeVlamv33f
1m6BTZ+Jidc9mVJ3f35nByRT2kZD9P3Y4HkdLJ4Xv9Tq41qmM9vX60fHxCafwzF/AzUnHTjPxu4rTjBCP5DFH8b0s+ld7JzKeIW+TrLIqnF5aHmeLQTOeFCvXohrIzFeizHIeJ7bPLaBPo5Xiaaexy/huhvRNHKfy54l0lcImuN+Rmj9pvbe1MDXtXPwJqjTeoV5j1/v5UC8rjL1nJ+Uc+57hvRKDG4eLYXJUO1X0vMy4DIdjtPeZ93Tycszz5lMHTf6OLme7nGb6XS1J2Xk8qmztcyv2ewkMznzEOZNAKlbI5P3dasjfR2U2pr7btCnrs817+HlNErNttGfhi1upH2erfmBwqPxU/jlpYPxkS7+7Sf3docRbZ+eUBg302WqgZytvf4kv+aI4mM3GEV5V4YXBoePB4MxFAxPZ/9p/Z8jO7lEbsmlAq9KAnrZ8qCWn2PLEXLSNXk9bORNPMU1icGgAMDhoAA2gATSABiqnAQwOoq6cqBndFBvdwAlOaAANVFkDGBwMDgYHDaABNIAG0EDlNIDBQdSVE3WVRyTkjRE3GkADaKCYBjA4GBwMDhpAA2gADaCBymkAg4OoKydqRjfFRjdwghMaQANV1gAGB4OD0yhDuAAAIABJREFUwUEDaAANoAE0UDkNYHAQdeVEXeURCXljxI0G0AAaKKYBDA4GB4ODBtAAGkADaKByGsDgIOrKiZrRTbHRDZzghAbQQJU1gMHB4GBw0AAaQANoAA1UTgMYHERdOVFXeURC3hhxowE0gAaKaQCDg8HB4KABNIAG0AAaqJwGMDiIunKiZnRTbHQDJzihATRQZQ1gcDA4GBw0gAbQABpAA5XTAAYHUVdO1FUekZA3RtxoAA2ggWIawOBgcDA4aAANoAE0gAYqpwEMDqKunKgZ3RQb3cAJTmgADVRZAxgcDA4GBw2gATSABtBA5TSAwUHUlRN1lUck5I0RNxpAA2igmAYSBufly5/l1avXdHoYHzSABtAAGkADaKC0GkgYnOn0jfzzny9LmyGcbTFnCyc4oQE0gAbQQJU1kDA4IiKvX7/WJoeZHMRfZfGTN/SNBtAAGqiuBlINjjI5aiZHLVf94x9j/sMADaABNIAG0AAaKJUGMg2OMjn8gwAEIAABCEAAAmUkgMEpY6kRMwQgAAEIQAACuQQwOLl4OAgBCEAAAhCAQBkJYHDKWGrEDAEIQAACEIBALgEMTi4eDkIAAhCAAAQgUEYCGJwylhoxQwACEIAABCCQSwCDk4uHgxCAAAQgAAEIlJEABqeMpUbMEIAABCAAAQjkEsDg5OLhIAQgAAEIQAACZSSAwSljqREzBCAAAQhAAAK5BDA4uXg4CAEIQAACEIBAGQlgcMpYasQMAQhAAAIQgEAuAQxOLh4OQgACEIAABCBQRgIYnDKWGjFDAAIQgAAEIJBLAIOTi4eDEIAABCAAAQiUkQAGp4ylRswQgAAEIAABCOQSwODk4uEgBCAAAQhAAAJlJIDBKWOpETMEIAABCEAAArkEMDi5eDgIAQhAAAIQgEAZCWBwylhqxAwBCEAAAhCAQC4BDE4uHg5CAAIQgAAEIFBGAhicMpYaMUMAAhCAAAQgkEsAg5OLh4MQgAAEIAABCJSRAAanjKVGzBCAAAQgAAEI5BLA4OTi4SAEIAABCEAAAmUkgMEpY6kRMwQgAAEIQAACuQQwOLl4OAgBCEAAAhCAQBkJYHDKWGrEDAEIQAACEIBALgEMTi4eDkIAAhCAAAQgUEYCGJwylhoxQwACEIAABCCQSwCDk4uHgxCAAAQgAAEIlJEABqeMpUbMEIAABCAAAQjkEkgYnJ92r8vCwkL+/y+G+qbDL8x5i3Lj8U+JhKLjC3J9N3k8cYH9xQ9dufHeVdk+eGN/e+TvdppZ+Wh9K5KXz8Wlq3J394W8OjK1lBPeYtyGYZTHt8A9JUt8BQEIQAACECgbgTkZnAVZWLwhvb/Hsx91xDManDcHsr0SmKfF2UyOneZxDY657vxvezKTLXvLcScNzpy5x4uPTxCAAAQgAIHSEsg3OI22DAaD5P//8ec2XDOx+OuujCwU9nHTOVuH83/911BaH8xucuw0V5q9ZOyDgbz42ZnBMfnc68p2Y0UWwxmsRflsMNsMkrzFuA/+7sdi51GZsblyzy8VjkIAAhCAAARKQSDf4ARLUVk5cTvahYVFubEbWRz7+MwGRyV6DLNQNM3YElUsn2+k/x9m6W1BFv9wkJX97O/fYtwqUTuP/mzTnLln54wjEIAABCAAgVIQmLPB8ZdMuj/4ebc74mMZHHWbGc1C0TSzDY5jIGLmZ4YyfUtxqwjsPJrlNLVEOFfuM2SVUyEAAQhAAAKnjUC+wXnvsqysrjj/70o/2Jhid7Qrv7waLe38yl+qso9nG5wDaSfScNL84Hy06XlxRVrfZi8b2WkuLjn3Uen82Z+RyTI4b/7el7sXoxmcUxP3alvMXJKdx+NzP21SJB4IQAACEIDA/AjkG5xwL0rU4S8sXJdusJnY7miv776QYfNiaESufz2KzTRkG4WhtFLTsdN0fn/vrgz+lQ7Bjimc3bDvH8zIxAyOfdz+ffG29H9OT0fkHce90BL/2bX4DM7xuWfli+8hAAEIQAAC5SeQb3BmmMHRBsZellm4LJfNJuGFvKeoRtL/07Zs5/xvNVZC42QvxaThtw1O4Rkc29SY39+7Lu3cR9TfcdwZMzjH555Gj+8gAAEIQAAC1SCQb3CO2H9imwkzQ/Pm25ZcNibB+mmOz4zth77cNkbJ2meSdZ+0mNLOjc3g/PK2Nlh31xZDI3Xxi6FkL4Sl3dH57i3FrVJJy+PcuTvZ4SMEIAABCECgTATmbnBE3sjBHy6HRsEsEx3L4MxoEhT4tM4/rUBiBscYub89kquhKbsqj/6WdmWB795i3Nl5nCP3AlnkFAhAAAIQgMBpJpBvcIKZjcTy0Tf+o+DZZuJAts2sS2AYZjY4xzAJCnR2TPFiSDU48kaGX0T7iBZ/35/9bcZziPvirz9LXbJ79G3y/UNxrnPgHsfEJwhAAAIQgEApCeQbnHA2w9nkG8x45JqJ2GxI3h6cFG5vRvLoV0GaBZal7DvkxmSdmG5wROTnvtxeNPm9KNvm0SXr2sxf5xS3mfVyfxozk5vHk3DPzBgHIAABCEAAAuUi8PYMjoiM/utquFRlOufCeNRMyEr0bpei1+V2/tZNMg2OiBx8aS2xBY+8W5fm/zqHuF1jYz4bhkfl8UTc83PHUQhAAAIQgEApCCQMTimiJkgIQAACEIAABCCQQwCDkwOHQxCAAAQgAAEIlJMABqec5UbUEIAABCAAAQjkEMDg5MDhEAQgAAEIQAAC5SSAwSlnuRE1BCAAAQhAAAI5BDA4OXA4BAEIQAACEIBAOQlgcMpZbkQNAQhAAAIQgEAOAQxODhwOQQACEIAABCBQTgIYnHKWG1FDAAIQgAAEIJBDAIOTA4dDEIAABCAAAQiUkwAGp5zlRtQQgAAEIAABCOQQwODkwOEQBCAAAQhAAALlJIDBKWe5ETUEIAABCEAAAjkEMD
g5cDgEAQhAAAIQgEA5CWBwylluRA0BCEAAAhCAQA4BDE4OHA5BAAIQgAAEIFBOAhiccpYbUUMAAhCAAAQgkEMAg5MDh0MQgAAEIAABCJSTAAannOVG1BCAAAQgAAEI5BDA4OTA4RAEIAABCEAAAuUkgMEpZ7kRNQQgAAEIQAACOQQwODlwOAQBCEAAAhCAQDkJYHDKWW5EDQEIQAACEIBADgEMTg4cDkEAAhCAAAQgUE4CGJxylhtRQwACEIAABCCQQwCDkwOHQxCAAAQgAAEIlJMABqec5UbUEIAABCAAAQjkEMDg5MDhEAQgAAEIQAAC5SSAwSlnuRE1BCAAAQhAAAI5BDA4OXA4BAEIQAACEIBAOQlgcMpZbkQNAQhAAAIQgEAOAQxODhwOQQACEIAABCBQTgKZBmc6fSMvX/4s//jHmP8wQANoAA2gATSABkqlgVSD8/r1a/nnP1/Kq1evZTqd8h8GaAANoAE0gAbQQKk0kDA4auZGmRuMDcYODaABNIAG0AAaKKsGEgZHLUsxc4Ogyypo4ka7aAANoAE0oDSQMDhqzw3iQBxoAA2gATSABtBAmTWAwWFNFUOLBtAAGkADaKByGsDgIOrKibrMIw5iZ8SMBtAAGpiPBjA4GBwMDhpAA2gADaCBymkAg4OoKydqRj/zGf3AEY5oAA2UWQMYHAwOBgcNoAE0gAbQQOU0gMFB1JUTdZlHHMTOiBkNoAE0MB8NYHAwOBgcNIAG0AAaQAOV0wAGB1FXTtSMfuYz+oEjHNEAGiizBjA4GBwMDhpAA2gADaCBymkAg4OoKyfqMo84iJ0RMxpAA2hgPhrA4GBwMDhoAA2gATSABiqnAQwOoq6cqBn9zGf0A0c4ogE0UGYNYHAwOBgcNIAG0AAaQAOV0wAGB1FXTtRlHnEQOyNmNIAG0MB8NIDBweBgcNAAGkADaAANVE4DGBxEXTlRM/qZz+gHjnBEA2igzBrA4GBwMDhoAA2gATSABiqnAQwOoq6cqMs84iB2RsxoAA2ggfloAIODwcHgoAE0gAbQABqonAYwOIi6cqJm9DOf0Q8c4YgG0ECZNYDBweBgcNAAGkADaAANVE4DGBxEXTlRl3nEQeyMmNEAGkAD89EABucdGZzJi560H2zK5p2WtJ+NMRXviHuRhmL8vCe97yiTIqw4Zz4N7ywcJy/60ts7lMkpqjOzxD/TuS/HMh77/89Efo9VpmMZPunJ/o/vXoszleWx8jbfPL0zgzO474nnJf/X3l+Vzc5+tSvvsCXLnie19+tSv7Im7e9mKcSRdK554t0fvHtTNFENzaRguhPdME1ez5K3o84dSFNpZp55/74ja54nzWdB2pO+bGpdNmVwCipk2RqQUx9vUN5rnVFBHR+lyalMnzXF89ak831wrvu5iI5cHWZecyjtVdVuWulNpzJRJmBSINbM+56ua8ffdWRz9Vy8j6jVZePhUMbzzEPAPdkXnZPlmy0ZHMs0JNto3d9d68holtgL6miyt+lzirWLb6P9PV0aOU5b844Nzobs7PWlH/7vSefuqtS8mqx/PccGaBZRvYNzh39cEq923A40WXmOU9DHuWbUWRPPKxh34QZ7lkrzDgzOdCrjp21pP2UG5zgaOfXXlN7gTGVy0JX2rj0I9OvFXE3bO2gHs7Ry+LihB4DnrjWls7cvo/FYRgd96dxfk3OeJ+duduVwXvEFeqjf61r9UF/6uzuycckT71JThjMbx2Qb/TYNznQ6lsHDdtyMvZX2d5a2+nSe+44NTlpnOZH+nZp4V9rzE/G8KsOc7nMssYdpJytPVkMx7+/PisGZNzfud4oauwoYnKSeKmRwvu/Kes2T5fuD1Jma8dOmNj9zM3N5evhrW1Y9Tza/KTprbXSebKOP1eYXnMFJ6mEqUwxO6gztKTA4U8nqSEd7O9K4tuy7+Etr0tjuy6jgEkiha0d92bm1JsvnPfFqF6R+syldZy/GcLsu9ds9OXzRlc2PLkjNXt54PZL+dkPWLqmp1XOyfK0hO3vWTNSoJ5tX6tH9r6glqk3pjaYyDY7tPDeVxPwcys6Vumw+NvdJVp6kwEfSu12X+vZQpirOkNmGNGMjvyCNWNw1ufDhhrTsuKd+DPX3a36+7LhD42XinYpm9KHP5twllUc7/qlMj0wvulc8b9EMzuSgE/I/dykjX9OpHFnuiYbAYmfnbbwv3fsbohmkaGPyzZbUr2xI56/J2Ee7Dalf2ZL+yzTeKTqx0w1/35f2el3qnw9Slm+H0rpSl/WH+2GlHn/XlebNulyoeeKdX5a1Ox3ZH8djM1p2p81HjzelfmVHhibt5zu+Tl8cSvfOqn/P2HR4/L66zGxeui5sSseqSz6vLenFYprI4HOll6YMYqPmkXRv1aV+r6/zHsY9HspOkEe1tN3YTu8UYxoKO7RDGT9tyYauqzW58FFDdpxZuzAdwyH4meDjdkTuZ3Vdmubt9CwdHj5pWnHFuam82HH5sfhtor/kHdR7J+YYg7RjaeV1kOzYbV2p9DbudxO6SpR/Sn3Jisef3d6M6koi1okM7i+Jt9QK9RnyOJEeTPtqa9kxjidoo9MMjs0yrKOmjVD5Njp6MZbhw4asmvb3Zkv6qs8wbJy4NI+89tdcdwZ/ngKDM5LuTU88Z73ysLMuNe+crN3vSG+vL71OU9bOe1L7pHPkTI9/bU3qv9mRbsa1k+f+yKD20aa0d9WyWVd2bi6L5y1LaxiJSQv1ypqsXbog63fa0t1tS093aofS+aQm3vk1aXZ60t/rBVOqNVnvHPpinIxkuNeXnd944l3Z0rH094YyUo251cCFwtUCdCrZtJjB0ft0bjak8b6Jx+THXf7LitsfRfkb+8ayv9eX7r26eJ5ZVgziTqkk44O+9L/ekrrnycaX/hLk8HvTWBZJL+KdxsK7ti7r5+uysa2mlQ1nO17/+kKaSXBP4ftyIM1LntQ+3JAdo43f1KXmLUvzWZCvYO/O6sOgrEMuwZ6JW71gRJqVf0sn4bVxDodf1cWrbUk/1vlPZbK3JTVvKdTp5JnScqR3Pd3+YU282rp0XkT3TGt0Fe/EAEM3tHVZu7YsF9ZV/ehK+7Gbz+i+00xeVh4D7vHR8UCaypB5tWhPlGIx7knD86Tx2F821HF/3JDGh5YG9NK2J8ufD1MMoBVbkG79k3VZDutqV3Z0eVrxTacyGx9rT4zpmMyenOlIukHb0NL66Uv3c38pPsx/EFfjzqZcUEszT4JlElVu3qq0LeNsxzX5fij9vR3Z8DwJl1kOZlxenQwDfW9KW6W715P271TbF083U1eXmjKwO+bM8rfqS6rGD6V9xZPaUeb5WTOm93noIXVG6HlTlmwtJtoKo6uj22i7zFQd8/ubmtRvt3V/1n/SloZaElu1Vi4CHW3eWUv0XV7NMoFOXPntr4n5bP58xwbHdJZmH05XWtfOSe3DTelZDfFUTxXGGx/d8b3wN4hu7OZU6Kxrg+lH02jqkf4dd213X3ZUhft8GLplLVS7Ywsq6uHD1UQHomI8/Ivat7IhXWuzmit2nRdHpPo7f
e+jK090rhFt0EmrjYg2x+mhv0H548gU6rgT501FzTqoDtw2d4mOL7WRCmLIyM8s6SXz5bNwG151nm9mlmXHbNjOKndXM4k4XYMzkf7dml6LjzXi04kM7tkjyeA8u4FSfII4tvZ8IzSLThL5d+7lH3fSnfRlq+bJ0j1npuf1UFrKpIVGa9YOPGkgE/FpPQTxLG3FOz018/BgWbxaQ3q6LvicY53ZsCVL3pqsXfNk6Y9RnZt8ozZRRnXIr4NLsvXUmGaluaA8vK38zeFBecc6iCDuwf0gvmBWKbWeZhrAHIMTaG7rqamffrzDh1vSNjOlJq5rUd3UfF/2ZVOVp8UjGZfbRtjpFPj96Za/aTnWVoyk96AVzWAHulJLR7GnmV4OZGvJk6UHpryK1pe0uArmI2BlzOE89BCaw2A/aO/hptRr52Rt2zLMibbC5MGN221DknVtcM8Tz2qHdVl/35PWA2tGTBuclLrs9F2pg+PMWE3MZ/PnOzY4yaeo1IzJhipka0SgR66pe3LUfh1PPKvRdhtdf9SbttenyJMHE+n9Lj6bpCtTIhZ/5FH/KmVUG4zsjZFS8SUbqLc0g5PCxTdchkfeiGkoLdVwWQ3ryQ3ObOm5ZTmdBgYnJV/6WM0TUwaFNZNoCJzGKSg/05jGYvprW+rWzImaUq559dho2zc0ZtZlNp3E0tKdcBDbHX+pRh8POh4zc+SbAauz1df5jdn4cUM8rxEuC6XqMLMDj+crGVvQYAa8UkfEzkyMLiNrqUF/vtWTodrMbtWxwf1abEZXx21dF8ZiOupw5iSlEQ/K29Z1eH1wzJT1bHws5u4Mzqgr654na4nZPSu+IG27nfDjCsr8XvTUZDIut4O17muVf5hP97vn/ozI1l72QNHX1WZi9lDdM1aOs9QXN44fu3omKlU7sXPj+Z2HHpJPUXmiZ/MzlhHjLOPxTFNm2d0yG35eE29pS/qxJVqn3LSOopnZKM0gvb8E/U2iDcvrT5w0Ylyrf+wdGxzT0RqwExkPO/5U3aVojVWLQ63j6r0f/p4O87veL+MsZ0VCyDATWYX6474MdtvSerApGyotveaZYnAS6fmCC9fAY3H66+N2pXXFruNNE6mO8+jKY+fX/93ppK38xk2Ke29TDv5PPcr4pBs+2hi/Nn5uIobU/MyWXuKegcFJ7ZycRqWwZhJxOuyC42YvkdGd/hmsc4ePmE99U2hM1nTqG5podDubTpL5n8p4d0M8L+po3NmN3DLSMwk1aQb7vFJ1mGlwrA7c0lMixgRPWyeH0vnYmhGNGUSfu64neoRq0vOZGgOn0suKO9yzUMDgJI2EijOuz6x0EoxdQ+N+VrNLn6slTfVqiFVp3GtLd3gosVcoZHIL9Gi1Ocm44nEnyiSvvPSxQ+ne9B/JVvvZNrc70j8Yx2Zq/Dyfk+VY2xa0xbqdDNrymeqLrY0k/8x8BGkcZURn0YPdPut0Xx5KP1hGDLWSWUYuf6cNSdPsi65sqL2ean/azU3Z6fRl/0d7RtLag5PQs59e+LqMtLjSvjtSB255VO/zv9ngBECDKbiYgJfWZetBS1pp/x9mby5MNgbphXb49YbevOw3QC1pPexKfziSvppKzG1c1P18wS19spUe34P4y/xSY8oU5NGVJ9kQJCuYOSfeOLv3jrNx44xfGz/X3D/8mZqf2dIL7xVWTP/6yEDYMQR5vtvXy4k69iKaScTpsAuOr97K0N6DVrAHy49Fb5I0swuxDlwd9+MvqpNk/t39KMEM5s1u+MRJbhk5eXXL16SXuEeiw7a5O787aZh7+j8dtrYh1LM7ZpbIN4a6Y9HLVuZ7P62suGfp0EzbEo/PLx9jprLSOZJPBq/xQU/a9xqyFhhjtV9vZxh0apncAma5bVB+vYrn0SmvsG5NZPSsIzt3NqSuN16rPWfRVgE/z6vSSGt/9Xc9fy/kjPUlHptf7p49QxnGZ8XtzJRmldMsekgYHJ2u80RvZhm5/F2dZ5jyyUgGnR3ZvBk8eKL2zakHWEyeM3Rk2hEMjqUJw+yIn6fD4DgjKd1pWFPW8UqRn8lYh5OZ+aHe3JjYt6CcdyGD447c82NKrZBB5Ymv06v7HF15kjySFcycE2+ck8tQ5jw1+6BG2541NR6/Nj+PqevCQYeWPgOTTC+KxaTls0hvAP28mP1ShTWTaLQcdsGySnqHaOKyfmpz7k8rJ2OYTSfJ/Kt0gj0OapkuJTZ3GSp2D7OEE+y10Dp09wGcdAbHWYaKpT8dyJZaqjFT62pfjnonlNp3ovbZGGNovr/Tl6HaWG19r+6XWn9U3c7sEKzyCco71SQHsZt3cB2bT5E4xvuyo17YafZsJXRoYn5XBsekF/xUT+YseVK76y+H+rqKZg7j5Wpdm6LJzHNT2mN/Sdfs07LuG54bmA5LE/PQQ7rBcTbcn6CNzowxzNdU1BNxalOz2a+XreegHTSbsdO0k/adldYsZVKlc0+HwXFmcKZ6FBff8OpDH8vgq7Z0854a0Ne6GxKnMn3Zl6Z+fFmtO7smIqhYk4E0l4rM4KgNlEviWctqoSh+HEj7YTf2Gu10sfuNf7Sc4ccwebolS6pTCN+86nTAqaLNPsc1KTrulA2h7uZYlR//2gKNnIopqGCuMZglvZBhmMegYocbVaMG0OdkNQ5FNZNoCFx2Y/1Un705N4zroCc7nYH/FFwYo//U1NKDtt7DZGYDzDWz6MRc4/70n5rakPZXjeRTVWYfg2Ui/OuDjsF0qBkb4LWxVR2v/ULHIh12mH+flzYt4XeBlr/ZlJrzZI5qxGvehjRu1WL7vfT3tTVZW43vA1N5Sa8/sxmctLrq70+LnhxKe0CgEB+H1+T7vrTvdcLHmk156r0rhnNCh0bbxQ1OqmlzysCkbf9U2wJa2/1wFtA/FswOmpkjrata+CSbff3+4x3pPB0FS1qz1heTz+Dnjz1p1LKfjj3cbWgTEC4bzUkPUftqx+PM4AQG/ThtdFyzYxl2WrLj7nly9685OoqYFzc4bvur7jEp/DZ6m0U1fn/HBsd9iqonnQcbUlePisbMwlh6t9RjrqvSfOK/2XI82peeXiNNMS+xSp1y7Yu+ND9S9zOP2gWN8qUt6R6M/L99os+py7J6dM9U8rzKZCrmR03pBfcYHfT8dBwDERe7EU4wMlePwj/w36rZ3d6Q+ser+mVTUQV0O2Bzvf0z+xzX4EytuPsv/L/5cvhsR68PJx7B16ahJqufq/j2nQbRTl/9HsyofNTUj8OHfydllvRi5aju6Vds9Yhv3eJ8uNeUVd0oRvuF1Ns9C2km0bGksAv+tMby79oyCBkFj3W6T72oNyHrfTLKJJh9JBYbK/9H6SRq0KzrNROfrdoY6Ta26prhH/3XGzQ6Q/0WWFVX1CP+scfa1X2CgYRaitCPBz/pSPPaBVldVa8DsPbHZTa0blzB55BXR4YjpamR7O9u6XqdeApHlWnao+FBY+/Zm7gDPaTXn9kMzprSUFieIxl2/Lfnxh4zPy4fl5fhcbcnh+pPKozHYjS7ZJ5KSujQsC1icJTWVZvZ0OU4eBHt
5fDfk2O9gylRp6aiZmf02+O/CvQyjnhE7Y6lq4eDIB+HMnjoc7Nn5dSAVP0pmqL1xdW5/zh69FqGwXAoA/XaDv0ov6dfAmg/yTUPPbhPUfV327J5Te1Lsk3d8dvoeIymbVqX9jDob0ZD6ehH8602w9VRWHYFDE5G++s/xVqLzaK6/Kv8+R0bnORTVPrlUQ96cmg9ReUDH8vggf+q7nDH+/k1adm73EMBmMYh+Pnaf0GZes23ufbc6qZ07ccifxzoR9TNca+mXk53mBgtxoXqpOPeQ71W/Fry75lk3uPlvrR/478gT8XhX+uLOWpoUjrgRL6zz0kYHHVtIu5zsqoemU+8RHEiw21TBvF9EWmVYjLc0e8qUnmJjS4Lp+fwDWfaDuVQvYxOd4yqTNXadSf25F1hzSQ6lnR2+sWC+p0kRkNZaSqe/tMgnrU3JsYnkX9T1m5+sz/rmaCUzt9PZyL7HZuP2ty6LjvPk0/JjL9pyqre7KheblnXfwdu3/2THJkNbXZ8CV61C7K+nfZ3hIKZAmvjtCk73Wmn/EmTzPpTJM6wvG0tqzJVA4vkXr5j8UmJQ71UUL23K2xfPKeOhXG5TIsYnKlMX/Rk0+jT2sOiWVnLOTEdhu1GUi/qRafJ8ko7z9eMbThUGonyz6yjbn6Dz6O+tNajtlBxUxpOe1HpPPQQlYspI/UCzk1pu3XmmG10IsaX+9K57W88N2kn6miKjvzyK2JwppLW/o6+Vu+Ts01bBv9QG9U6/s4MTnpFKwLT/yNixf/oo3XP18G1CfMUnaP/aN14HH/CYdbC1n+UcizjnHRy86/idF7mlnv+rPFlnR/85d7Y0x2p505mylvmlGjh9KLyiXEw5XkkqxNoxs2/+evGR6aZEbN9vxPoRC8NkDdVAAAgAElEQVRvHLkvrXi+M8vIjvcYv4f16RjXxsr6bV0flMFRmp8Xn5BHYvBQQC9FGLy0n35SLxiM76PLZ2r0Yt8jPa4wH0fFdNL6Yur4cdvSo+I77vF5tdGmDXiLy0YJ7b4t7R2X5Tu8rgQGJ73C5VdcroFPhTQQvFzN3d9DGVeojOfS6PsPT6Ttw0AraOUsagCDM5eGhcpzFivPW8/zuCdb5v1M7uvx0W34tvG3Xg5lYa2XvaIXO8KFdvmsawCDU5bGizjPXIc2+a7rv2dp23qdOzo4czoo3Enp5Z1ow3Hh69AUmqqoBjA4FS1YGjdGb2gADaABNHCWNYDBweAwekEDaAANoAE0UDkNYHAQdeVEfZZHLOSdETsaQANowNcABgeDg8FBA2gADaABNFA5DWBwEHXlRM3ohREsGkADaAANYHAwOBgcNIAG0AAaQAOV0wAGB1FXTtSM3Bi5oQE0gAbQAAYHg4PBQQNoAA2gATRQOQ1gcBB15UTNyI2RGxpAA2gADWBwMDgYHDSABtAAGkADldMABgdRV07UjNwYuaEBNIAG0AAGB4ODwUEDaAANoAE0UDkNYHAQdeVEzciNkRsaQANoAA1gcDA4GBw0gAbQABpAA5XTAAYHUVdO1IzcGLmhATSABtAABgeDg8FBA2gADaABNFA5DWBwEHXlRM3IjZEbGkADaAANYHAwOBgcNIAG0AAaQAOV0wAGB1FXTtSM3Bi5oQE0gAbQAAYHg4PBQQNoAA2gATRQOQ0kDM7Llz/Lq1evK5dR3DxuHg2gATSABtDA2dFAwuBMp2/kn/98icHBzaMBNIAG0AAaQAOl1UDC4IiIvH79WpscZnLOjtNlVENZowE0gAbQQJU0kGpwlMlRMzlqueof/xjzHwZoAA2gATSABtBAqTSQaXCUyeEfBCAAAQhAAAIQKCMBDE4ZS42YIQABCEAAAhDIJYDBycXDQQhAAAIQgAAEykgAg1PGUiNmCEAAAhCAAARyCWBwcvFwEAIQgAAEIACBMhLA4JSx1IgZAhCAAAQgAIFcAhicXDwchAAEIAABCECgjAQwOGUsNWKGAAQgAAEIQCCXAAYnFw8HIQABCEAAAhAoIwEMThlLjZghAAEIQAACEMglgMHJxcNBCEAAAhCAAATKSACDU8ZSI2YIQAACEIAABHIJYHBy8XAQAhCAAAQgAIEyEsDglLHUiBkCEIAABCAAgVwCGJxcPByEAAQgAAEIQKCMBDA4ZSw1YoYABCAAAQhAIJcABicXDwchAAEIQAACECgjAQxOGUuNmCEAAQhAAAIQyCWAwcnFw0EIQAACEIAABMpIAINTxlIjZghAAAIQgAAEcglgcHLxcBACEIAABCAAgTISwOCUsdSIGQIQgAAEIACBXAIYnFw8HIQABCAAAQhAoIwEMDhlLDVihgAEIAABCEAglwAGJxcPByEAAQhAAAIQKCMBDE4ZS42YIQABCEAAAhDIJYDBycXDQQhAAAIQgAAEykgAg1PGUiNmCEAAAhCAAARyCWBwcvFwEAIQgAAEIACBMhLA4JSx1IgZAhCAAAQgAIFcAhicXDwchAAEIAABCECgjAQwOKbU3rySV6+C//8yX/ITAhCAAAQgAIEyEkgYnJ92r8vCwkL+/y+GOq/DL8x5i3Lj8U+J/EfHF+T6bvJ44gL7ix+6cuO9q7J98Mb+9sjf7TSz8hGL5Ye+tP7vL2TRyfP5X96V7n+/OjK9tBNGuzfk/Nq2HMxklIbScmJIxn9dun8Xkb935bo594OWJBDZxxda4pdWWqR8BwEIQAACEKgmgTkZnAVZWLwhPdX5Wv9ssxEzFdY5qb++OZDtlcA8Lc5mcuw0kwbBv6eJ5c1327KyaExa2s/L0vp2NoP15mBbVgLzsTiTyTmmwVlYkMt/OJBYlBicVFnxJQQgAAEInB0C+Qan0ZbBYJD8/z/+zIZrJhZ/3ZWRxc4+bkyFdTj/138NpfXB7CbHTnOl2UvGPhjIwd+VHfhJur+KTM3iWkt6Oq89af/2cjSD9ctHsTzlB+0fffNtSy7PbHJsg7Minz1O4T44kJ9U6DEDo/LgGLHYcWZwipQZ50AAAhCAQLUI5BucYCkqK8u2mfBnSxblxm5kcezjMxsclegxTE7xNG1DcVna/2Pn8kC2V1dkRf/fluH/2seK/T67ybHjCZaispKKGZjApH3QkqFZEosdx+BkYeR7CEAAAhCoLoE5Gxx/qar7gw+suNnIATyjySme5oG0rOWpy//RlYOfc+I4xqHZTM4JDY5aqvpi6C9VYXCOUVpcAgEIQAACVSKQb3DeuxzMYpjZDPXzrvSD/cK2mVj55dVoo+6v/KUq+3j2DM6BtMPZEjsd6/cPzkdLRosrufti7DQXl6x7hGm05SAowYMvraUos6S0tCLXf78t3W9/iu9rSSv179opfOJpXn7PWgZbsWZZEvezDc6i/GIlfh89m/TnIPKYgbkqV39p0rgonz1/4yxhMYOTQM0XEIAABCBQeQL5Bsc8qRP7GS2f2Gbi+u4LGTYvhkbk+tcjiR/PeorK7thNR33Ez/fuysAsxzhFZKeZvsnY7vBfycGfr8v5WP6itBdXPpNB3qzOt60wv+lpRfcyx89vDTKMUwEOZskwZnBaMvzBeqrq4mcy/B/
rM09ROQrhIwQgAAEInAUC+QZnhhkcPUNjLyctXJbLZpPwQt5j4iPp/2lbtnP+txorkZFYvCFmCSytgGyDc9QMTnj9m1fy4tuePGreluv2bJEyPnmbjP/Wz417+08t+dQ8DbawIHoTduZ+HtvgzDKD4xs29Wi6edR98YPLcjE0bbahC3PMLxCAAAQgAIFKE8g3OGbGIAOBbSbMEpS978TMWqif5njGrbK//qEvt41ROsLcqJukxZR6859fRE9YBU+FmfNe/Xc7es/MwvVcQ2WuSfs5+uZ29DSVesIs09yoq22DE82Spd03/hSVMTAj6f56MTKCGJxUdHwJAQhAAAJng8DcDY7IGzn4Q3Jvy7EMzozmRhVZYYPzXSuc8UjO0gzls9AgLErru9nFMJu50ZFbL/o7jsFRj4/35Ia1cdo3mMYAzZ4HroAABCAAAQiUlUC+wfnl7fQlmG/8R8GzzcSBbJtZl8AozGxwjmFuVCHYMV389Wep8T/69pXIm4F8ZpmB87+6K4+eDGTw5JHc/ZW9qfkzGcTeond0Uc9ubnTklsG5KDf+M23Z7pEM1SuI3D04VkivvrkdGTfNHoNj4eFXCEAAAhA4IwTyDU44i+Fslk38qYaUJai/PZKr1vUzGZw3I3lkXsJXYFnKLivb4NhLZPbvJpbR408zNxj758ff62Onk/X7m789Cpe38vfcuHewl6gc3iHHYGYnx+CoGbTBlr1UhcFxSfMZAhCAAASqT+DtGRwRGf3X1XBPiDEVhZGqGZyV/A3FafeaxeCo61/9dy/jb1HdlvbzY/4tqm9uy8qRe27c6OdlcNQLEgdyN5ydwuC4pPkMAQhAAALVJ5AwONXPclYO34R/TfxN7mbgrOv5HgIQgAAEIACB00IAg3NaSoI4IAABCEAAAhCYGwEMztxQciMIQAACEIAABE4LAQzOaSkJ4oAABCAAAQhAYG4EMDhzQ8mNIAABCEAAAhA4LQQwOKelJIgDAhCAAAQgAIH/v71za23jWsPw/5ufMFeGQCjoxr7JYBCBCoMJBBWiQkWg8kVUsChEGyICRVAUggJFhqBAEBSbIkNAhIp6oxpmX32bdZpZp5FG8VgZTd5A0MFz+Na7nvnmXadRYQrA4BQmJQ4EBaAAFIACUAAKlEUBGJyy1ATigAJQAApAASgABQpTAAanMClxICgABaAAFIACUKAsCsDglKUmEAcUgAJQAApAAShQmAIwOIVJiQNBASgABaAAFIACZVEABqcsNYE4oAAUgAJQAApAgcIUgMEpTEocCApAASgABaAAFCiLAjA4ZakJxAEFoAAUgAJQAAoUpgAMTmFS4kBQAApAASgABaBAWRSAwSlLTSAOKAAFoAAUgAJQoDAFYHAKkxIHggJQAApAASgABcqiAAxOWWoCcUABKAAFoAAUgAKFKQCDU5iUOBAUgAJQAApAAShQFgVgcMpSE4gDCkABKAAFoAAUKEwBGJzCpMSBoAAUgAJQAApAgbIoAINTlppAHFAACkABKAAFoEBhCsDgFCYlDgQFoAAUgAJQAAqURQEYnLLUBOKAAlAACkABKAAFClMABqcwKXEgKAAFoAAUgAJQoCwKwOCUpSYQBxSAAlAACkABKFCYAjA4hUmJA0EBKAAFoAAUgAJlUQAGpyw1gTigABSAAlAACkCBwhSAwSlMShwICkABKAAFoAAUKIsCMDhlqQnEAQWgABSAAlAAChSmAAxOYVLiQFAACkABKAAFoEBZFIDBKUtNIA4oAAWgABSAAlCgMAUyDU4c/49ubv5Lf/+9xH9oAAbAABgAA2AADOwVA16Dc3t7S//8c0P//ntLcRzjPzQAA2AADIABMAAG9ooBx+CwnhtmbmBsYOzAABgAA2AADICBfWXAMThsWAo9NwB6X4FG3GAXDIABMAAGGAOOwWFzbgAH4AADYAAMgAEwAAb2mQEYHIypwtCCATAABsAAGKgcAzA4gLpyUO9ziwOxo8UMBsAAGCiGARgcGBwYHDAABsAAGAADlWMABgdQVw5qtH6Kaf1AR+gIBsDAPjMAgwODA4MDBsAAGAADYKByDMDgAOrKQb3PLQ7EjhYzGAADYKAYBmBwYHBgcMAAGAADYAAMVI4BGBxAXTmo0foppvUDHaEjGAAD+8wADA4MDgwOGAADYAAMgIHKMQCDA6grB/U+tzgQO1rMYAAMgIFiGIDBgcGBwQEDYAAMgAEwUDkGYHAAdeWgRuunmNYPdISOYAAM7DMDMDgwODA4YAAMgAEwAAYqxwAMDqCuHNT73OJA7GgxgwEwAAaKYQAGBwYHBgcMgAEwAAbAQOUYgMEB1JWDGq2fYlo/0BE6ggEwsM8MwODA4MDggAEwAAbAABioHAMwOIC6clDvc4sDsaPFDAbAABgohgEYHBgcGBwwAAbAABgAA5VjAAYHUFcOarR+imn9QEfoCAbAwD4zAIMDgwODAwbAABgAA2CgcgzA4ADqykG9zy0OxI4WMxgAA2CgGAZgcGBwYHDAABgAA2AADFSOARicHUG9uh7T4LxN7ec9GnxYVg6kfW5xLD+OafxnCevkZk6TdxOa3xTTmtnnOqpi7KvrCY0v5rTaUQ6qoob3WSbUz/7nnZ0ZnOmLgILA/R9+V6f28LLaF/msR0dBQOF3EUXHDRr8uQ04CxqeBBS8mO7eFK2WtFyucp53Rcvlkla325Rt07ZT6jJmiiz7pyE1goC6H+S5VxNqcy67NC3ZjWb+us6vmcZwkbMONumJv9/nDZEf2+Yrk6k5DeosHzZo+Cmtl9VySctV+vne482M724x3Eu+/9A19cqttV6WvPnUXz/xzTY5UT8v3n8NlndscJrUv5jQJPk/puHPdQqDkE5/r24Sn/1aoyD80hto3guy+AtoMWxQEOSM+4uSzaaYd2Bw4piW7wc0eF/GHpxLGr0e0SV6cPbH4G1xHayuRjR4ozfuBO9VMLTC4BSc73dqcGJy6ycmXq6TIS3uyRh+DRNQ5XPu2OD4bpYrmjwPKTge0Lyi0NztooDBudcenIoyV+WkVeqybWFw3HJUzeAUnO93bHDc+oHB8WlS5u9KYHBiyuopWFz0qXVyRAdBQAeHDWq9nNAi5xBIrn0XE+o/a9DRg4CC8CFFT7s0suZizF5GFP00pvn1iNqPH1KoD2/cLmjyskWNwwMKggM6OmlR/0LriVqMqX0cpcc/ZkNUbRovYorl3/of7V6MGfWPI2q/VcfJY3AWNP4poujljGIWZ6JZk7pGC1Gey4g7pIePmtTT445FDNF3oSiXHrfHEHCNHgltDg5ZGfX4Y4o3ns/WQH1Oe3BWV8NE/4PDjHLFMW2sd+cGpGmnl215SaMXTeIaeNhY/dGh6LhJw79UrOnr4k2LouMOTVTPi1F+Dyf6efX3H/spL+x79fnTkmYv09jqz/o0/Zye3004K5qcRRQ9HXoaEQsaPYsoOpukw8RGvD4+tFgYy3rMNtfJZxEzv9Y2tYA3nf9qQKeeod75b01vncx+jSg6HdDlnTSU5fTFpvf+aXzN33WpyXNDSA8ft2mYkVtYb8DibZ
uiY5HrxFC2vJ51bfO817nlOalNwyt3mHn554i6TyN6GIqh8+aLEV0urbpkvZs5tzMYiKURyOj9tfO9yrF2r4jQpE8zVe48BmdT/cRaPr0eU/ep0lxMlViqc8UxGXF5c7kWm7afrQU+u1ztSpMSGJwFjZ4GFFhJbz48pTA4oMaLIY0vJjQedqnxIKDwiS9JmwKKfUOKfujTKGPf1ceumBfzuE2DN2zYbET9p0cUBEfUm6XH470vxw1qHD6k0+cDGr0Z0Jjf1OY0fBJS8KBB3eGYJhdjGr5o0AEbbhvORdJfLWh2MaH+DwEFxx0ey+RiRgs2xq4lQrOy7VacdkFmXkRym6ctan2n4lHlsYf/suIO6OjFVN7klnR5MaHRWURBoLqZZdyeGJZXE5r83qEoCKj5HzEEOfukkmqe86V6+7QITk7p9EFEzZcjTWc9XrF/LmYc3T363kypexhQ+KhJfcXGDxGFwRF1P8hyybk79deyrhNd5Nj9szGJZJlVfo2TZF9LBzuhy88tZkgStjtUDwMKDrs0WzN3Y/VHm4KgTgPbkP01oHoQUOutGqLLitfS245NlcHWV35unDTooN6i3nBEw+FUamOVlx8jz/mn1A0Dil7p2st6DAIyh3hm1KsFVPt1Jq7JO2gYxwsayWu+x7mY0OgXMcTe/kNyIcvbet6mhyddGr6b0ORNn5qPWGPB1F/v2V19mtHkok/NIKDojHE+ocmVqhOfTp7vVjPJbZsG7LwXYxr8yHKaed7VB5b70vyYxHfYpaky5XFMebczr1kRFy+b1+C4+V7XQT+WbYRimzmbtTz1owzOj21q67mSX98B1TWmjLi8ufxyDcee+lHXCF7NRtE96rFjg6Nulmoezoh6JwcUPmrT+FoDgiddzw3gWkwQbb5Zc+Fn7Wslct7Sfz6yWrSX1D8OKPxFJsOkJaLd2GRl8Amg4SkN9bjjmOa/sXkrTRppLWrjQlGV6Vycqvx3MDhswqIRz1xMUP4+NYVi4qq9XUys14HdwHVz5yQYFbvvNaM825xPT27ivezBsRI0+5swM0fUVxO2s+rdZsaJ0zY4K5r8HHLDoCf7OF7R9KxGQa0nW5Ryu7o1tCrj6FyIG942nDjltxM6/xxQ7UwZUcHM6n2HakFAnfeKIc/rakKdMCDbkIn4OjSR5ih3fdmxKSZsfeXnPA0TXq98YvVmPqcvQrNRtBxTK4iocVKjIDGXMcV/DSgKainXd9FQsmTqvKLZ6w4NVA+oLC9rsBlD7jcTaoea0VK5xWjY2de+px6Vzr7X9x0xCdfIAQsan/fSnmnJQdqYkee4mVKHGcFzmfvybueLQ5UtaRytz/fe/Mh6tew5gDZzNmt56kcZHCdXyqkSyfXtH47KitW5djN0wXZbMn1HHXdscNxVVKzHpHluTqScv4oy5uQwCAMzgVkC8H0zJvRuXqGwovGPZm8SB9qZHzSnwbHdgpQVJ1v2aYvYf6HcSw+OntilLsJwqbFwEXfoXZVktXR9CcbS2rhY7WTDt93ufMbx+P7S4HjKFcdmKz43M06clsGR9Ze0yPUye26WYRAZvSKmYdiOE6f8dkLnn7WbdRLblDpOz4WbSLgpMFgWvU3hz2p4aov6smNTsdj6ys9ePdU+yWv+84seqbQhwT+zm9OHLoVBOzFs/Cap54O7aLgY0SnT2em107SW5dWvf1GvkrOzdDWke7O8o8H5yMpeo85FdgNQ6JbqozPHryF5g8+7nb6//p6XzbNq1pfvXR2EnlsbnDz1owyOJ6eI83WS1ZS+uHzf6eXGe+1aSK7rr/fdjg2OutGqAq9oORtS65B1sauWsTQEbN4Dn/sh5nSo95vG8LcC8PMlTd8MqHfepiY7F59z4jE4RiuLxS4SUTJWbsQpxnT1bnJvTPaNIIHBTnLWDTjZTmnIXrO3MZOEfWz9GDFNzwIKnoySFQLmvua2zoXsLc9253OOKXVOhheMsptl5hrnYcaJ0zyOMp5qLpHijr/KeUbJEvNYmMJ0qETcoJNW8JacOOW3TYT9OdFjvc7JcWc9qumGzDZsMl6d3WRf1irX+ciKxdbX/pzE7ONpfTmM81sNCWbeuHHnPQ+hfAzAksbPAkoNXEzOMEcSz/pzCx1WNP2FDVWyeSt1ap0NaDSbm49GyCyv5EzLJW5eyBODTzf13ZxGT9l8QDZnsUntl0M+zKU/Z0dc0wd0ZOQsmWN5/hM5Ou92Oh/6e142Z4hqTb7XdFHHcfKPzZyjdY762Zgr02X7bv1kNFYThlQ94FXV4dd+/coGR4Igh49UK4+DVTulznmPer7/r7PH8H1Q+kSe/97kk5dFoupR7/WIJrMFTVgS1y42//FEIqo96fjjOzcf5uc9hnNxqovCTnLWDdh7MWVvYyYJ+9jqnOLVjtPc19zW0dRbnu3O5xxT3nBTA6HHIMv884SP5/LY8zDjxGlpJ/9ef5bB3nlPzsESsfBHAKhu7QzDkJcTp/x2Qrc/Jyys1zk9rtlLp7fYxTbrj2PwkRWLra/9OYlZr0v1fovzx8K8BM9Z75Mol8gfopeXm2I+bBWQyiu8jFlxbzB3qYYxLa/GNDhrUUMaXjYPrz8z5+CkJliVTXK2NresL78eQ/b7FS0+DKn/vEkRn+DM5pKlUwDENV2nli+v8u/GfGgt73ZZcXBWHIMjtfDle00XdUwn/9h1l8HW2vqBwdnZ/BdVj1/ztRwGx0ou/KZhdKWrJLH51bjhZCbTGZ+kaM9lYBXBW4naxWYk9eR4dst9fVzeY8iL0xzPZ8exk5x1A05i0M+ZvY2ZJMwbnAnenIbfBxRoXejmvvr5PO+9yWa785nxpFqIm5h9TnFsNV8qNzNOnJZ2vpuiV3MZD0/WYtjIjWE7Tpzy2wnd/pzEZTNja5V+5kNo3JD5YtuivlQsxnwPz+R5R+80Fqe80qj4e+xcPpdvW+L5Uldsnk2LxnIVEP/+eEAzPrE6/Z6fT8WtPVxPxJFfQyPu5SX12YM41VyszPLuyuBY+rLVP7W0F4trow3hGWVJeIop73ZZ+681OFaO49tq8wTVMZ38Y9ddptaaBnb9wODA4CjAinxdC7zl6GPelW5OeBWxLGn6akCjdasL+L416ryXLSp10d5MqMuXL7Px6YxktppSt5anByem2XnNGFZLtPo8pQF7ONumScaxmDeRDmeIi1JNGE2HCawbsCqP8Zq9jZ0keNy1jrFagsfO6yAkNTmWfSf29Y/XJ+VVcchkY7SW2VJLplPO8znHlPUUhC0aa3qy7YROWrx5mXGSoq3dkq/qCz1j9PHVmPrDqVgFp8odi3kstfMBX61jT+LdhhOn/HZCtz8nMWTwnPxdS/jKkL1iw1Xm6hp2/tz1ZU/elucSc760J0U7emuxeOLLfX62Lz92jVrPGua8PP59nRrfh+b3bJ87aLj6NKHB2TBdtqzKzOYMqt6KzPLmNzj+Hsv1urG6Y8P9vZcTa2WPnLeoGm2fR9QMQm3VXHrcy7d9Gr5fiJWUebfz1CGLZZt871uYEcdygYTS1Vd3lta56qcIg+NrfLOnvq9Zx
ehc2xm6YbuUxyK02HEPjr2KakzD8yZFfJlrOgcn5t3PIQVhnbrvLmnBHl++uKQxX5LpMS8GLKzr2tr3ekLdx+y7tnw2ibiJBYcdGl0t+E8MLPk2ER2x+UAqGagLVfuciP55TC32DInHXRrLYyyuxuI81g2dX+zOMeQqHLYU/lwsCx2xZ5t8X+fLdu/L4MRa3JNr9tjxJc0/9KnpW4LPTUNI9V9YfJuWRMoelcddvhw+MXjbnM+oRwa6uHFHT04p0nSeX3T50uhQmy+UmxkrKXrnL8mf1jj6cUDTRKOBmCtmr45hN5Y3TfkzJOn4/ZdwkuyjdLBvxvZntV2WYU/+ricNyT6bAPp0ZN0MY8rPh1wOH0bUfp0+JuFhvc4fF5AM0Th667F43m/FizQNzgRrMReKzUVxVlzeRUPFxc9jmrOcxK4dyWJNPWIhs7x5DI4cdjts8WXe0+u0kSaeCaM9W8lTt6zXhT8V/tVM5MzlgmbDFn8cRppPYpr9Kh6H0Xo9leWY0/S13O63dOl93u0cblXedFZRZeR72cBlQ2l8efu7IXVPHlK9rhlHVl677myt89TPHQ2O6Nk6ohZj/oP8HTG54ix40KEpTE6peoh2bHDcVVRsom7zfOz5QcElTc/Zc2W0fR40qKc/VMtzkfOL7XZOo+d1Y9+DeptGenf65ylfop78PhZL1G/nzqO4/eZEJmb7GGxy30nPeeha5jFuLmnwg3hAHp8YyPe1W+N2D4PnprDxorUmdztxH1CdLZl3HqK4otlLVQfmaiFfUlvN+vxZRawsRis09/nssikt5jRnD0JjRpjzEFL009DzEwY5mLGTYoZ2/MGC/Nklm87JTAFrFWcYBsaoU34/J46mdkK3Pyf8K53UwyFtHc3PypA5N391PCfeDD4+T6hbF5NaA/ZcFVYnV9ZvfTl6m7E4ZfbqlXF+9rgA3nvissmHC63feeLnuqOGy/e9hHHBohVbZnnzGJyY4usxtRV3fH6R0IvnEDXXS9WT87qiy6F+nYgHmJ6+nFlG1rdd5PlNwLzbuXXK49Vzt3yfle+Xf3Spzh64yrZjuXh4SZfbLhPnP7uyoX4yrnfGhuixThsp/rw95w9VZRPNAzXUJ58/pC+U8XLt1JerG/YrVpOdGZwvrzjxI475f/RRE+hW7qs9vEb1nLMAAANwSURBVMqOgy8dv+uPRPIfpVzScs157PMan1mcX8P58x+Oy/MDmautyrbK+oHO3OfT6lBPCqo+N2p1B2b087H3MuZC6ueunNixfeHndUuADS7z1tcqD0MZdbquDHnPv+4Y9/S3QvLGuthulumTpfkD7Mz5cUY9OcdR/OvH8OuflMM5hrl93u3Wx2Ue07dtZu7YEJ99rCRep9G2OQb7WN7PXytnb6mDN/Zv6Bh7YHAKAvIbqtRvHWqUf901I+Y2GEuncW2Uqlvd5VcsirDnt7nbrat3/A16fXsMwOAguZc8uX97F+X9JOJLGpxEYumw5wnc93NO1F0huvJhL2s1GPIW8hYY2MgADA4g2QhJIUkaOn9dndkPy/LnnAxoYv9AJurm69bNJv350Gw64RjXI4wzGMjHAAzOpuSCv5c7+aN+UD9gAAyAATDgYQAGxyMK3HE+dwydoBMYAANgAAyUlQEYHBgcOH8wAAbAABgAA5VjAAYHUFcO6rK2JhAXWrpgAAyAgd0xAIMDgwODAwbAABgAA2CgcgzA4ADqykGNFtLuWkjQGlqDATBQVgZgcGBwYHDAABgAA2AADFSOARgcQF05qMvamkBcaOmCATAABnbHAAwODA4MDhgAA2AADICByjEAgwOoKwc1Wki7ayFBa2gNBsBAWRmAwYHBgcEBA2AADIABMFA5BmBwAHXloC5rawJxoaULBsAAGNgdAzA4MDgwOGAADIABMAAGKscADA6grhzUaCHtroUEraE1GAADZWUABgcGBwYHDIABMAAGwEDlGIDBAdSVg7qsrQnEhZYuGAADYGB3DMDgwODA4IABMAAGwAAYqBwDMDiAunJQo4W0uxYStIbWYAAMlJUBGBwYHBgcMAAGwAAYAAOVYwAGB1BXDuqytiYQF1q6YAAMgIHdMeAYnJub/9K//97ipgfjAwbAABgAA2AADOwtA47BieP/0T//3OxtgeCOd+eOoTW0BgNgAAyAgbIy4BgcIqLb21tuctCTA3DLCi7iAptgAAyAATCwjgGvwWEmh/XksOGqv/9e4j80AANgAAyAATAABvaKgUyDw0wO/kEBKAAFoAAUgAJQYB8VgMHZx1pDzFAACkABKAAFoMBaBWBw1sqDP0IBKAAFoAAUgAL7qAAMzj7WGmKGAlAACkABKAAF1irwf/KNY3a32OY8AAAAAElFTkSuQmCC) # [참고](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/discussion/48038) # + id="ITulkdrUMIH7" # !pip install google_trans_new from google_trans_new import google_translator translator = google_translator() toxic_es = toxic.copy() toxic_de = toxic.copy() toxic_fr = toxic.copy() toxic_es['3-trans'] = toxic_es['3-trans'].apply(lambda x: translator.translate(translator.translate(x,lang_tgt='es'), lang_tgt='en')) toxic_de['3-trans'] = toxic_de['3-trans'].apply(lambda x: translator.translate(translator.translate(x,lang_tgt='de'), lang_tgt='en')) toxic_fr['3-trans'] = toxic_fr['3-trans'].apply(lambda x: translator.translate(translator.translate(x,lang_tgt='fr'), lang_tgt='en')) # + [markdown] id="GPBlnpTCczQc" # --------- # + [markdown] id="g-JqYpY4rXbN" # # Preprocessing Augumented data # + [markdown] id="PJKuztz9syCt" # ## English -> French -> English # + colab={"base_uri": "https://localhost:8080/", "height": 76} id="XyxsxpodPq5k" outputId="55a74540-a406-4cef-f34c-5aba84e9a679" train_fr = pd.read_csv('/content/drive/Shareddrives/SOGANG Parrot/Ho-colab/train_fr.csv') train_fr.head(1) # + colab={"base_uri": "https://localhost:8080/", 
"height": 93} id="Rr8UJMtic6ck" outputId="6e57bb56-e723-41dd-fc0c-bbf7ef434454" train_fr['2-trans'] = train_fr['comment_text'] train_fr.head(1) # + colab={"base_uri": "https://localhost:8080/"} id="TIV5a3kZnsH1" outputId="b11f935f-0624-4f78-8c73-848aa032027e" # 1. Capitalization / Lower case train_fr['2-trans'] = train_fr['2-trans'].apply(lambda x: x.lower()) # 2. Remove ~.jpg , (UTC) train_fr['2-trans'] = train_fr['2-trans'].apply(lambda x: re.sub(r"\S+\.jpg|\(UTC\)", " ", x)) # 3. Expand the Contractions # !pip install contractions import contractions train_fr['2-trans'] = train_fr['2-trans'].apply(lambda x: contractions.fix(x)) # 4. Remove URLs train_fr['2-trans'] = train_fr['2-trans'].apply(lambda x: re.sub(r"https?://\S+|www\.\S+", " ", x)) # 5. Remove HTML tags train_fr['2-trans'] = train_fr['2-trans'].apply(lambda x: re.sub(r"<.*?>|&([a-z0-9]+|#[0-9]{1,6}|#x[0-9a-f]{1,6});", " ", x)) # 6. Remove Non-ASCI train_fr['2-trans'] = train_fr['2-trans'].apply(lambda x: re.sub(r'[^\x00-\x7f]',' ', x)) # 7. Remove punctuations train_fr['2-trans'] = train_fr['2-trans'].apply(lambda x: re.sub(r'[]!"$%&\'()*+,./:;=#@?[\\^_`{|}~-]+', " ", x)) # 8. Capitalization / Lower case train_fr['2-trans'] = train_fr['2-trans'].apply(lambda x: x.lower()) # Remove stopwords import nltk nltk.download('stopwords') from nltk.corpus import stopwords stop = stopwords.words('english') train_fr['2-trans'] = train_fr['2-trans'].apply(lambda x: ' '.join([word for word in x.split() if word not in (stop)])) # + colab={"base_uri": "https://localhost:8080/"} id="nIMFx2KRoB41" outputId="98c2c13d-9528-4f28-8b59-340b11a933bf" # Find zero len comment zerolen_t = list() num = 0 for i in train_fr['2-trans']: if not re.search('\S',i): zerolen_t.append(num) num += 1 print(len(zerolen_t)) # 전처리 되어서 ''된 갯수 # 기존 comment_text 로 교체 for i in zerolen_t: train_fr['2-trans'][i] = train_fr['comment_text'][i] # 기본 데이터로 대체 # + [markdown] id="aKlhAXOxs5Le" # ## English -> Deutsch -> English # + colab={"base_uri": "https://localhost:8080/", "height": 76} id="o2hjHUMPdEHS" outputId="467deac8-aacd-4f6a-b2bc-1d60fd465b46" train_de = pd.read_csv('/content/drive/Shareddrives/SOGANG Parrot/Ho-colab/train_de.csv') train_de.head(1) # + colab={"base_uri": "https://localhost:8080/", "height": 93} id="uA8pY69Vdhza" outputId="033ed521-1617-4585-caf2-466753ba4e4c" train_de['2-trans'] = train_de['comment_text'] train_de.head(1) # + id="XqFhKDBRp67v" # 1. Capitalization / Lower case train_de['2-trans'] = train_de['2-trans'].apply(lambda x: x.lower()) # 2. Remove ~.jpg , (UTC) train_de['2-trans'] = train_de['2-trans'].apply(lambda x: re.sub(r"\S+\.jpg|\(UTC\)", " ", x)) # 3. Expand the Contractions train_de['2-trans'] = train_de['2-trans'].apply(lambda x: contractions.fix(x)) # 4. Remove URLs train_de['2-trans'] = train_de['2-trans'].apply(lambda x: re.sub(r"https?://\S+|www\.\S+", " ", x)) # 5. Remove HTML tags train_de['2-trans'] = train_de['2-trans'].apply(lambda x: re.sub(r"<.*?>|&([a-z0-9]+|#[0-9]{1,6}|#x[0-9a-f]{1,6});", " ", x)) # 6. Remove Non-ASCI train_de['2-trans'] = train_de['2-trans'].apply(lambda x: re.sub(r'[^\x00-\x7f]',' ', x)) # 7. Remove punctuations train_de['2-trans'] = train_de['2-trans'].apply(lambda x: re.sub(r'[]!"$%&\'()*+,./:;=#@?[\\^_`{|}~-]+', " ", x)) # 8. 
Capitalization / Lower case train_de['2-trans'] = train_de['2-trans'].apply(lambda x: x.lower()) # Remove stopwords train_de['2-trans'] = train_de['2-trans'].apply(lambda x: ' '.join([word for word in x.split() if word not in (stop)])) # + colab={"base_uri": "https://localhost:8080/"} id="2kZNuHFoqG1k" outputId="43cafa33-1faa-4352-ac99-398eabb4bd92" # Find zero len comment zerolen_t = list() num = 0 for i in train_de['2-trans']: if not re.search('\S',i): zerolen_t.append(num) num += 1 print(len(zerolen_t)) # 전처리 되어서 ''된 갯수 # 기존 comment_text 로 교체 for i in zerolen_t: train_de['2-trans'][i] = train_de['comment_text'][i] # 기본 데이터로 대체 # + [markdown] id="U7B_lZwBtCQB" # ## English -> Espanol -> English # + colab={"base_uri": "https://localhost:8080/", "height": 76} id="IVl03sGndlIy" outputId="55106343-b3be-4b98-f3c8-5e1688a81c6b" train_es = pd.read_csv('/content/drive/Shareddrives/SOGANG Parrot/Ho-colab/train_es.csv') train_es.head(1) # + colab={"base_uri": "https://localhost:8080/", "height": 93} id="DzwZAkIldr0b" outputId="60eadeec-b6fc-4abc-a5b0-54db0c5512d0" train_es['2-trans'] = train_es['comment_text'] train_es.head(1) # + id="bXkMTwYtqTLH" # 1. Capitalization / Lower case train_es['2-trans'] = train_es['2-trans'].apply(lambda x: x.lower()) # 2. Remove ~.jpg , (UTC) train_es['2-trans'] = train_es['2-trans'].apply(lambda x: re.sub(r"\S+\.jpg|\(UTC\)", " ", x)) # 3. Expand the Contractions train_es['2-trans'] = train_es['2-trans'].apply(lambda x: contractions.fix(x)) # 4. Remove URLs train_es['2-trans'] = train_es['2-trans'].apply(lambda x: re.sub(r"https?://\S+|www\.\S+", " ", x)) # 5. Remove HTML tags train_es['2-trans'] = train_es['2-trans'].apply(lambda x: re.sub(r"<.*?>|&([a-z0-9]+|#[0-9]{1,6}|#x[0-9a-f]{1,6});", " ", x)) # 6. Remove Non-ASCI train_es['2-trans'] = train_es['2-trans'].apply(lambda x: re.sub(r'[^\x00-\x7f]',' ', x)) # 7. Remove punctuations train_es['2-trans'] = train_es['2-trans'].apply(lambda x: re.sub(r'[]!"$%&\'()*+,./:;=#@?[\\^_`{|}~-]+', " ", x)) # 8. 
Capitalization / Lower case train_es['2-trans'] = train_es['2-trans'].apply(lambda x: x.lower()) # Remove stopwords train_es['2-trans'] = train_es['2-trans'].apply(lambda x: ' '.join([word for word in x.split() if word not in (stop)])) # + colab={"base_uri": "https://localhost:8080/"} id="ZyslKAERqdnk" outputId="789720d3-2a4e-4a71-d779-8c8cec420180" # Find zero len comment zerolen_t = list() num = 0 for i in train_es['2-trans']: if not re.search('\S',i): zerolen_t.append(num) num += 1 print(len(zerolen_t)) # 전처리 되어서 ''된 갯수 # 기존 comment_text 로 교체 for i in zerolen_t: train_es['2-trans'][i] = train_es['comment_text'][i] # 기본 데이터로 대체 # + [markdown] id="CjsqxP-KtLjN" # # Train Data Augumentation # + colab={"base_uri": "https://localhost:8080/"} id="l2MqZCWeeAYz" outputId="02e4a1fd-24c6-4660-914f-b6461247f750" toxic_train = pd.concat([train_de,train_es,train_fr]) toxic_train.shape # + id="h9Su1QKTexKC" del train_de, train_es, train_fr # + id="AGy3Y73trn4y" train_augumented = pd.concat([toxic_train, train]) from sklearn.utils import shuffle train_augumented = shuffle(train_augumented) # + colab={"base_uri": "https://localhost:8080/", "height": 333} id="IEvBujVdr1Le" outputId="6837b6e9-db29-49b3-e00e-16cf18380dce" train_augumented.head(4) # + colab={"base_uri": "https://localhost:8080/"} id="W9SeyAANslU-" outputId="0de1c909-6d92-4516-c9d5-a2e7c08d5dcc" train_augumented.shape # + [markdown] id="dind33JQvJsw" # # New Train Data # + id="AIGM67uSsRPe" train_augumented.to_csv('/content/drive/Shareddrives/SOGANG Parrot/Ho-colab/0408-train_augumented.csv',index=False) # + id="7RSWGpB-tWdd" train_augumented = pd.read_csv('/content/drive/Shareddrives/SOGANG Parrot/Ho-colab/0408-train_augumented.csv') # + [markdown] id="P0e6dr0usr2K" # ------------- # + [markdown] id="sN79Hfgtthx1" # # Tokenization # + id="iHAZp9_Csg-e" list_sentences_train = train_augumented['2-trans'] # (638284,) list_sentences_test = test['2-trans'] # (153164,) # + colab={"base_uri": "https://localhost:8080/"} id="xArdxKsYtpH3" outputId="ae03e294-a17b-406e-f977-3c3ea3f6db3c" max_features = 100000 tokenizer = Tokenizer(num_words=max_features) tokenizer.fit_on_texts(list_sentences_train) list_tokenized_train = tokenizer.texts_to_sequences(list_sentences_train) # list_tokenized_train[:1] = [[688,75,1,126,130, ,,, ]] list_tokenized_test = tokenizer.texts_to_sequences(list_sentences_test) # list_tokenized_test[:1] = [[2665,655,8849,656, ,,, ]] # tokenizer.word_counts = OrderedDict([('explanation', 1771),('why', 17818),('the', 496540),('edits', 9957), ,,, ]) # tokenizer.word_index = {'the': 1,'to': 2,'of': 3,'and': 4, ,,, } len(tokenizer.word_index) # + [markdown] id="BSxWSLwhttFp" # ---------- # + [markdown] id="gNWwjZL0tu6V" # # Padding # + colab={"base_uri": "https://localhost:8080/"} id="uzdkHKKetsDV" outputId="f2f39756-3b20-4cc2-ea60-d9c4e3540ad3" maxlen = 50 X_t = pad_sequences(list_tokenized_train, maxlen=maxlen) # (638284, 50) X_te = pad_sequences(list_tokenized_test, maxlen=maxlen) # (153164, 50) X_t.shape # + colab={"base_uri": "https://localhost:8080/"} id="C596Wmtzt1zo" outputId="5938d65c-48a9-48b9-e313-b781839bba79" list_classes = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"] y = train_augumented[list_classes].values # y.shape (159571, 6) y.shape # + [markdown] id="w5pqOLwIt3a_" # ----------- # + [markdown] id="GvSLdgadt6LY" # # Pretrained-Embedding-matrix # + colab={"base_uri": "https://localhost:8080/"} id="zVSRN8fnt31F" outputId="5078b080-fc79-4b1b-f2d3-5c035bd0eb44" embedding_dict = dict() f = 
open('/content/drive/Shareddrives/SOGANG Parrot/Pretrained-Embedding-Vector/glove.twitter.27B.200d.txt/glove.twitter.27B.200d.txt', encoding="utf8") for line in f: word_vector = line.split() word = word_vector[0] word_vector_arr = np.asarray(word_vector[1:], dtype='float32') # 200 embedding_dict[word] = word_vector_arr f.close() print('There are %s Embedding Vectors\n' % len(embedding_dict)) print(embedding_dict['respectable']) print(len(embedding_dict['respectable'])) embedding_matrix = np.zeros((len(tokenizer.word_index)+1, 200)) # will delete first row for word, i in tokenizer.word_index.items(): temp = embedding_dict.get(word) if temp is not None: embedding_matrix[i] = temp print(embedding_matrix.shape) embedding_matrix = np.delete(embedding_matrix,0,axis=0) # delete first row print(embedding_matrix.shape) # + id="NJ44lkKruBul" np.save('/content/drive/Shareddrives/SOGANG Parrot/Pretrained-Embedding-Vector/0408-augumentated-200d-pretrained-embed-Glove.npy',embedding_matrix) # + colab={"base_uri": "https://localhost:8080/"} id="AWJo6Wi9uKJA" outputId="efcd3c40-8209-44d2-e2ca-72c80327e104" embedding_matrix = np.load('/content/drive/Shareddrives/SOGANG Parrot/Pretrained-Embedding-Vector/0408-augumentated-200d-pretrained-embed-Glove.npy') embedding_matrix.shape # + [markdown] id="_MeJ2zqwuT61" # -------------------- # + [markdown] id="m0laVTL1uWSV" # # Model # + id="nAJQUrJXuS8R" inp = Input(shape=(maxlen, )) #maxlen=50 x = Embedding(len(tokenizer.word_index), embedding_matrix.shape[1], weights=[embedding_matrix],trainable=False)(inp) x = SpatialDropout1D(0.2)(x) x = Bidirectional(GRU(128, return_sequences=True))(x) x = Conv1D(64, 3, padding='valid', kernel_initializer='glorot_uniform')(x) avg_pool = GlobalMaxPooling1D()(x) max_pool = GlobalAveragePooling1D()(x) conc = concatenate([avg_pool, max_pool]) x = Dense(128, activation="relu")(conc) x = Dropout(0.1)(x) x = Dense(6, activation="sigmoid")(x) # + colab={"base_uri": "https://localhost:8080/"} id="xX0KfwSJulH1" outputId="f80f1c54-d8e4-4a49-cc24-72e6a468a49e" model = Model(inputs=inp, outputs=x) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) model.summary() # + colab={"base_uri": "https://localhost:8080/"} id="lu0-W7HnumKV" outputId="38b03e56-c436-46e4-963c-84149466eceb" tf.keras.backend.clear_session() from tensorflow.keras.callbacks import ModelCheckpoint #checkpoint checkpoint = ModelCheckpoint(filepath='/content/drive/Shareddrives/SOGANG Parrot/Trained-Model/0408-augumentated-pretrained-embed-Glove.hdf5', monitor='val_loss', verbose=1, save_best_only=True) batch_size = 128 epochs = 10 hist = model.fit(X_t,y, batch_size=batch_size, epochs=epochs, callbacks=[checkpoint], validation_split=0.1) # + id="49P3VHJnuumq" sample_submission = pd.read_csv("/content/drive/Shareddrives/SOGANG Parrot/sample_submission.csv/sample_submission.csv") sample_submission[list_classes] = model.predict(X_te) sample_submission.to_csv("/content/drive/Shareddrives/SOGANG Parrot/All-submission/0408-augumentated-pretrained-embed-Glove.csv", index=False) # + [markdown] id="m3Y1dwH93UVR" # 
![image.png](data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAABKsAAABtCAYAAABugOIyAAAgAElEQVR4Ae2dz29UR7r+82ewzTLLWWbLErHKEskrFOk7QsqGrMiVLrriohF4RkEhF7X4oVw6wJgZAzMdB8bYVtvY7ti+tmno0BaOmxjUYxNnMGm143Ri6/mqfrzn1Klzut1tu42beSIR9486dao+9Z7T9T7nrbfegfNf6dn3zju+JAESIAESIAESIAESIAESIAESIAESIAESIIH2EUjSot5xT5dUwP2er0mABEiABEiABEiABEiABEiABEiABEiABEhgtwgkaVEUq3aLLushARIgARIgARIgARIgARIgARIgARIgARJoiQDFqpZwsTAJkAAJkAAJkAAJkAAJkAAJkAAJkAAJkEA7CVCsaidd1k0CJEACJEACJEACJEACJEACJEACJEACJNASAYpVLeFiYRIgARIgARIgARIgARIgARIgARIgARIggXYSoFjVTrqsmwRIgARIgARIgARIgARIgARIgARIgARIoCUCFKtawsXCJEACJEACJEACJEACJEACJEACJEACJEAC7SRAsaqddFk3CZAACZAACZAACZAACZAACZAACZAACZBASwQoVrWEi4VJgARIgARIgARIgARIgARIgARIgARIgATaSYBiVTvpsm4SIAESIAESIAESIAESIAESIAESIAESIIGWCFCsagkXC5MACZAACZAACZAACZAACZAACZAACZAACbSTAMWqdtJl3SRAAiRAAiRAAiRAAiRAAiRAAiRAAiRAAi0RoFjVEi4WJgESIAESIAESIAESIAESIAESIAESIAESaCcBilXtpMu6SYAESIAESIAESIAESIAESIAESIAESIAEWiJAsaolXCxMAiRAAiRAAiRAAiRAAiRAAiRAAiRAAiTQTgIUq9pJl3WTAAmQAAmQAAmQAAmQAAmQAAmQAAmQAAm0RIBiVUu4WJgESIAESIAESIAESIAESIAESIAESIAESKCdBChWtZMu6yYBEiABEiABEiABEiABEiABEiABEiABEmiJAMWqlnCxMAmQAAmQAAmQAAmQAAmQAAmQAAmQAAmQQDsJUKxqJ13WTQIkQAIkQAIkQAIkQAIkQAIkQAIkQAIk0BIBilUt4WJhEiABEiABEiABEiABEiABEiABEiABEiCBdhKgWNVOuqybBEiABEiABEiABEiABEiABEiABEiABEigJQIUq1rCxcIkQAIkQAIkQAIkQAIkQAIkQAIkQAIkQALtJECxqp10WTcJkAAJkAAJkAAJkAAJkAAJkAAJkAAJkEBLBChWtYSLhUmABEiABEiABEiABEiABEiABEiABEiABNpJgGJVO+mybhIgARIgARIgARIgARIgARIgARIgARIggZYIUKxqCRcLkwAJkAAJkAAJkAAJkAAJkAAJkAAJkAAJtJMAxap20mXdJEACJEACJEACJEACJEACJEACJEACJEACLRGgWNUSLhYmARIgARIgARIgARIgARIgARIgARIgARJoJwGKVe2ky7pJgARIgARIgARIgARIgARIgARIgARIgARaIkCxqiVcLEwCJEACJEACJEACJEACJEACJEACJEACJNBOAhSr2kmXdZMACZAACZAACZAACZAACZAACZAACZAACbREgGJVS7hYmARIgARIgARIgARIgARIgARIgARIgARIoJ0EKFa1ky7rJgESIAESIAESIAESIAESIAESIAESIAESaIkAxaqWcLEwCZAACZAACZAACZAACZAACZAACZAACZBAOwlQrGonXdZNAiRAAiRAAiRAAiRAAiRAAiRAAiRAAiTQEgGKVS3hYmESIAESIAESIAESIAESIAESIAESIAESIIF2EqBY1U66rJsESIAESIAESIAESIAESIAESIAESIAESKAlAhSrWsLFwiRAAiRAAiRAAiRAAiRAAiRAAiRAAiRAAu0k0GFi1QZqtZr592s7sexe3bUfFrH4r9ruVWhr2qiUsViuYGPXa2aFe0mgLeP4awXl52VUOuQa2UvePBcJkAAJkAAJBAQ2K1h8nMf8D7s/TwvO0e4Xv9p5ca3GOWG7Wb9l9dd+mEf+8SIqm7vcMdrkLgPdr9VtoPK8iPw3OeQeFrG8D26jbfGrdoS/87SLHXW3DQd3iFhVw/LDQfSkU0ilwn/X7+exvL4dKsuYUfVML2/n4OaP2Sgjp9s7g909UwXFrxWHfsxXm29OZ5VsboyWp0N7iNjG7WHMzK9gH9w3G2Df6TiaG+CGN8mozN3V10n//FqDc/MrEiABEiABEuhUAnaO4MwJzRwgjZ57Ocw3+ZCw9mzYzCtv5LHSLhQbRkzyf6t3erqN1Xnkvr4emRen0j0YfLi8z+c+O+15Zx6/Nt8fHSux3Rt3MPywjDVvLtdcL+11cH8eW874qvPoj/g+K8jfMHPo4We7M1umTTY3aq2WMr5OI1+yOZ+p1fNuVX7l0R2kUmn0ZDLI3C9idasDdvB9PX9PfL+Zl6rynfpVTTTw5UyT/vduaxdNtO0tLdIBYtUGlqfvIJXOIDe3jIqNrKq8LCKXSSOVzqHccgTJ3l3UtaUiiku78yPg2uDGagnFZ6tv8VO05sbI3LyGUSiXUQ7+lVD8ph/XUymk7+Wx0rJ9uKTb+3pH42gnHuYG7bTz11WU5kpY3cf9dlrLlyRAAiRAAiTQIgE7R/jrICYf5pG3/3LZO/q3P5W6g5mXTcSeq9/Lx/m2zNOkQyJSxH6rpcB2/v6Yxx0lPChx6psiSuUyFufzGFbz4lQKd6aX3+L54XaAvfljxA7uPgjtNT81irt/MWOWHppHpeVm2utgW2IVoHyU/ONdmi/SJlsevWYP2J9i1Qryf00h3e7ADwsp2d8Lfb9VG7yyI7+qmQFpSqxqh3bRTOPezjL7X6yyDnmi6r9ewnAqhdYjSJoTQt7OIe+UXjU3Rg1v4JV5DKZTSH9TfjsnbfXEqk4ZYraTBEiABEiABLZFoL6TvrE0iV4l5Hxd3Ibzv63GNDxIRIrdE6tqWHygImJ6MbnkC3J7EFnQsLf8sh6B+nZQwfx9NZ5pTJb98axXm3xe/zqQEsFfO2dsz6oS2mTAuQ0vGvo6+nzN+Uy72zRzztZ98O21YmsG26u35aOaEavaol203NK35oD9L1Zpo0ih3o/8hoq0kgiStUXk+jIo/OCPzwoKfRnkvpcg2fCi1iGr93qQTqVwXS0d86OVgjo3sDqfM09AdMiuDbNeX0Y+m9FLFK/f7kfumftcZA2L4xlkHkeDy9fKBYzeN0//0n/JYDghZLv2rxJmbL2pG3fQ/818JFJm5XEGmfHFWNhvvG4/tNhpU6WEnG1HYt99jPZ9rG0PCigLWl2mxXOoJ5vTw7ijwpHTPbir+xqOUZ1m6I+3unmZEP+7KP4UrcXlpMZt9LHPCcDmGsqPR9F/W4XZqzDXYeSTouRqpv0Z9XQs3YNMdgYldwlCYEM1rDy2/bRPwfxxDN7/uor5b+5qu9I2Mh3NKaDLZcRu
M8i49v1DAZm+HBYjYwK4fTZ25/e5xXGLIuU7EiABEiABEtgjAnaOkBhRUkGxTzn/o1jUge32t62vgBU9ZzPzLz2vtL/PMk9bWxjVv6ejz7wfUAAr3/br7/J6uYnqZg2rz2YwbH+LzW91CatBML2Ze+q5gZ5jmt/qyBzVnT+kruNOdqaJ/FlrVtwYxaLMfx3qtVUTbSCRBvqrhPmMmu8GTZXjmynnMKst5TGs50jOEqVt9Uka8Pb+rS9WARsvcjoirveR9Rf0PM71WywXYR/M/8ProFJbQcFGFqb/chejj73loAlilZ5LNjNfnHbtOmmM9rlN+nZ9bxSF552T93crXwdI9pni837PJtRQutdrkg+TMNxr3+eQ6Qv9WOWDRHxSt051X7ufsDRbfJVKBSXr7zQSUrdmYBoa+FG23cF7fX0Yfz3x+lDlfTvJGF0gcp9sRqxqRbuw7XR96/hviS2E1ny52g8Fe3920gZF+qjGZhSFqBMfnmyfvNr/YlVtEaPpFHq/Wdx6PXfdSBNzEYfqr9zcBzF4I4PhxyWUy4uYnzZLxyLh06KOPhhFJlvQodalh1Iuj8nbdzE5v4hyuYT8fSNqhE9G7M3bCZFcWxhEOnUd/Q/VOcsoP5vBXS/6Z+OHGdxJpZEZL2JRlXlexOjt6FNCfdF6k7TKvK17el4ftzg/g34lAN2ewXIwobFtyo5i9C/9mLFtVz9wSpAZXIhP0FxbNe1P4U42j/nnqm3zmLynRJpRLAb5w1o4x6/LmFF9uxG2pTh+F9eHBr219W4rwtdb3rxs3jC3X/U4RUOw11AaSut25Z+ZiV9p9i7SqTRyL5wnXwntLwxnkHaXIFgb6r/fj+tfjyI/X8L8vPnB8MdRv//HJGaGetDvjWM6k8eKPbWejC5MIpNKYfhb076Vqv0y4UZar8+7aRvhqPAVCZAACZAACbSTgMzjknL1VDD/DyVWyQTdzklS/ehXESw37hjRST3Y9B14ee/NrwCz5CWVnoQJfrFzBCVCfT2qlyJO6t9+M+cyv9VbiFWVEoZ1zqDruPtgEnpJmH6fxuC8++DT5yhRLGkMFhMEJ7+4mqfY5YHqwWTOWXoWme+65XSfchi0S9SuZ0thlFrAqF/P067fziCj5ifqvNvuk9/ot+99I7EKS5Mmn5X4C9bZDf0WyyNgL3ZvrwM9b0zrh+7u+EbmtXKsnAOAmUPLdaLOsYHlWTWHNdfJ8DeTGLUP9KPzRX989rFNBnadRs9gDvmHoV1H7N/v0j56v6WvkyBWNeNvQrG5nUI6M4yC9nVKiPkwCRw2qisolwt6dVNmyvqzP6yZVSx16/Tua9rGM+i/f0ePS/FZCcXv69/3tmZgGqrLOfdv/f4foxjNOP7+hPLnUrgzuxKuvNlYMf5o+i5yc6ZPJRVgoMqJiKxOkeBjxRC1ol2oq+6l8vtTuH5/xvjWzwpmSXfEfwda8uXu96P/xl2MPpxHaX7eJr+vYN76tsb/F+3DG5tYh97sB/tfrFK/fd+PmrX5FvrijxUkJqq0N+J4FFYdsSoVj7gxhnAHhX/ZgbF1ph8sRp5ArX6b0eJORLjAKgrqaV7wQ7Bmnn4F7+0Eaiqabn3jhyIm58Iol+WpFFL/8NauVxeRfxg+2fAvRvxUxF21JPKpd6GvlbTYF15otk1qEhcpasOQ/fNG7LOG1YUZjD70ciH8uojRVMoRupo/h04Gnh702gJUiiZJeMgy0pDgzdY3L8tcxkBzSrgoKybxZLDc1L6fXApOpX/EV+YmUQwU6A2UJ9KeGKjKb6D8TRqpv9rJm9hQQj4CfxxNfxLG0S5pvDvnDFo9e/dvpHtiGy4nviYBEiABEiCBdhJoIFbZ38bgNxgyJzG5nCJPyaWszBFQQynrCl22Dz/m9dLC9IRNK1ApYfR+Bne93FArD3u16ODODZNFCuXcpxHLrbVZQVEvCRtGKXgAmMAxEIWUqHAXo1NFlOrMjU0SZD+PlXVaUr3I/2jql3L97jwDNZSnVJ/SGP3ekhNm6qFcJNp8h31K6Obb9FGyHageqvw2xm4Cxq2KVWr+Hxk3Gd80Rp974xbYeoJYJXmnEp3kFILIr6SB2Zc2CUgS8KgALHYd2n9Sl/bLZ1v7OvZ+GIxtM/5mfR9mWV3z4sPUheD71qqgrfOvk06QhKlg5aHKP+0ENlgbb1YwNAwmsWhzV9fcv0FAhrVpX6yKLZnegO5jajLcAK1aRuFBDiXHzVIt1/5+eibcgMP3serwaVq72ChjMu3fnwH8WkZOBeuIUNaqL5fkW6sNuJI+f6o2f9jiN6dOP/fi444QqzSI9RWUHoaJCFOp62a5mDvrsD+gTYtVngBlgC9jJp1Cpmit1dbpTjxUOfOj4z6NsEer3emCi8QXq9STOPV9seE2sSuzSuiYRNntm6k++L8vclSKGaTSThh2UFLdqN2bjm1TQt8r2liT63CqS3hZR6Tb8hwmVD8xOZ9SpSO7liScNngq1KjN0THQnPqS8lgogSmFlLR5rYTB2I+/1wYbtRV78qWK6d1/bKRTHRtSxfxxNDfjnH1yGz3f8nQaKbft9ezdu5G+WduI9oHvSIAESIAESGDnBKxz9o8iVsVpWVvFskSiR/KZilglywKds9vfUffBmOwQ6P62GxFqa8fWzKOiuVQTRQqZ4yTk1JQlYYFw4TQ38rJWweJDm1pAzZf0v+voV0u2AufNjwgLazDREWWYqOz65eRhaGqoZFJPCDOZL0mVu9Enqest/Ct2MPnc7A6pnO3Kj2WYqP0UUn+dwbIE7rcsViXMG63AmhIbk3ELBI24WKX9haTcWRtrWFErPSR6pt747DeblIhId+4sbf+pqFcnBGKAfL4P/xrfoJGv44tVTfib1ofx/Vvdfc1mq/tdgljVyC+y94fgvqZtPBNL01IPv2Eg9znvr2/TgR9ubTxJeNPRjHE/3j//xvNRJ0q3ycgqqaQJ7cLc75Pb4aY6atmXk+te2qJ2Suxz9I3gc+WzlpFLpcIHEu53++B154hVDqyN9VWUZYlbehClqv2ynvNuwyPDiYe5wJJvUFFxQ0LEfQFse2KVCfXLpG1upgeTKD5bQSWYVNh+BE8nVP6CHArzZayuyy+YKZMocjgXp4MLJrx4ECW9ws/rn1PQ9KnRzdAUVhOcxfmCDlnv78uYXFMRYanZcyTc6IL22JwTzg0o+Mp5sfUNPFqPLq/WZKv11d4/nTMrYBiGQuu1zUlPLevam9NA9bJBucRxrBPdFhufevV6YpV/jkjr9M1692wjUjffkAAJkAAJkEBbCFjnLBBpXOflOvofrzjR8CJWJcxv7O+oK1ZBRJdsydZhhRzf4dmsYbU8j8LDHIbVfELnbjLtCOeb8nDTy736w0yw1Mqfi0g9bh1bIdxYU/PicDfA9D/sQzn74C18iFqnJuEQzIHccsLa8pOy/vxsl/vktuBteC1ilREVXXtVy7C8iI5Wxao64zaprg/5LmHczBxaHGUrcATLZ3dGfV/YpNh/0rzf5pqLXPs763LbjjbjlHD/Cs5or1HnmlRLyxr6m9Ye9DJ
ezx/KWDa+7xucTr9I8OFsncnHmWiv9KzNy+b5KtG64+8Mg21GVsk14FabdH51T39ZQvFhHrmsWt5scgOHS8pbFKuc89XTLmK+nXOM+3LnvpwZL5UPK/abY/OPtfKb47at3a87UqwKoPy6jEm1baYYft2LxL+gzPsgeiqoUL2wk5qJsvm0Tp3GuOQGH1YQNaY6os36KhbnJnUIeY8SrlQOq8fOullV3cYalucL+mLRIorOYbUY5AyIniceoRO2yL+w6rQpiBZrdDMMBRyd9+BhHoX5RSyvrpiw9eAm2ew5/HFxW+0t33O/cl6bm1eDNttJp+Ss0uW9ra5ly2v9dy6afLC2uoji1Cj65YZ1ox+FH6MRU8k3ZaeRdWxIlWhlHGM3tHr1ejdg/xxOy7y1182OW6QGviEBEiABEiCBPSZg5g+pvkmdS1TnAFWRHz9WUIs+2wvndamEuYL9HY06rHYpiyyLsBEqkQec1UWTS9QmTh/+Jo/8XAmLD9VyiiYiq6wYkVJ5LB/mE/8VXzYIr69Lu4ZFlYYglTL5NaV/Sc6aW0fDcpa18JOywZzPVtS2PrkN7dzXIlZJnlFjs8tYXYsZrJ2bRe1I91zYB+NpxyZ47/JZRqtildmVMO7buLW2/voN2qTwajTvb5AnqfW+tucIveJGrr/EU1g78K/JRv6mZXP3QfL9R92XFr3NqaKnNueMCBy2zmS/yPMxPF8lWnf83Zb+nj3E93n890HN/vklUERt9KVyCD4solRewdoLlU/OuSb844IKm3zhaRcx365ONXX7ocpH2uRxDuoz49U7pPqWPObb+80JTtC2F/terDK7mtRLILmBxbH4U4NoniHFzr+g7EUdC5FTZc0TtK0EsB2JVZHhrGH1WzW5iefPcoupnRd6ndBc32ijS/3cIwETli7rcusZsTz9S5jMSXWyNM7PiwVfWGr2HCt6yWVkAijnsiGJ0QmkfBn+1Rwa3MBN7qtwHa7mlBQOHFZZ/1VtFQWVS0K2w/ZDWusd2eDm7Y+j6Y+MVbTC2BjXqzdy0/KXgUbr3DXbiFbLdyRAAiRAAiTQRgJ2HpfopPuntXOSpLmC/R315xob5Ukd+aTyWJqlUdHlKuazNEafRQUlESRcB04+izhwNi9mysth6rc88X1tWT/5z3sP16SsnM+0wQoWflSYKry5AZ33RWslDcpJpJlEfddhhp30SRr/Fv+VcYnYQb3+6nlccv7SfjdaShJrJ6UBscvcgvQWCeNm5pyhI65z5jp5zNzm6SVJcSXYFNmvNglr12K7boc66LVKlh4RTPy227F17zt+Eb17qetv2us6cRlg/OCET3zfWu0saFK4BEv9IkeZsQhyK3u+SqRowput/D05JNGvSvqd8M4vKXjCDclsjf5yQe84Oa/7txXtova9WmaYsETdrVCpE5GUPtEvm/PljL6RHKgTrW+/vdv3YpW5udcRcvzkY/am1PvQhhha2htLkzoxZngR20mOm+gtUjaNYEc/ewPwf1xMu8IbvAxs9CLxRJuNNZTnJjFvk1nKMSofgNrVzZyjhpX5PAp+wipvHXD0PAD0k79eTC75T2gqKH6dQpAUVCLHfPW9mciqOixkmVs42fP6HXTUF8TqJ+JTF6/ejSShnU51dieTZIGttmR2V4js9qA53QkSioZ11bBcLKJk95zeqJZRnJoPE+rZgnrNcDDhtYlY73vJ8NWmAE+HkblXMMfX41YvssrfcVCd2yrxEdu29cZ+aPwb6V7YRgiSr0iABEiABEigzQTaK1bph5wq8j2bQ+6vzkMq2yvjOEUFrGBzlTqRVdHfajM3UwmHTYqGEFdlYQa5h4X6UQ02IW9SolwECdrTdk4pUWLeTsbK+dHJ4LcuJxveBBu82LlHOOeTtu+gT1LFW/y3JbFKllTKw1HNRSViVzt3Ow/pRayKzRslgXQKjcbN2HHoy4hIm/Yf5tvowsQcs6pt+9Ymxf7j8/6NlwXkpvIo/hAVnPelCdrk2snJyMUuHF+5SX9TbSbhbyCm+7+6iML8MpKC/kI+CWKVbFCR4Bep3ExpNzDD91XCihNfGVtN9vfcA3z/2H8flPXOn1xONj8Ir5FoFFNQW+RFS9rFeknvqhiIeEFNFcxnM7j7rdU0dsGX0/f823bzr+A8ANaXUZwrYbXRph5u+T1+ve/FKjhbSU6q3E1rKjFhBStqW8fbKveTk7NKdiFQy+oehttOZv5x1+yUN6+TNgWRVpmhQWTuzWBxdU0/XaqUZ3A3nUJakkiqwagjNGxLrFJRW6rNtyexWLEJFitlzNxTCdUlsaLdTSU9iOKPpl21tRXMj6kfqPBiiV9UctxdzJQr5mlZZRmF7HUv83+zQlKSJdqos6EClm37136cx2jmDu6oyVwgLLVwjmoJg4r5vRmUdZ1rWH0+g7uZQQxGdlZMao8khxxGQYX/238qn1Zu0Kwzjmy3rKtwOD1fxZpKzLq2isVZtY2pI/bZHVHuTCyiYpO3in30urv/SLmxomWyhpV5s3vlXS9Jvy94qub446jf9w1iMOOOo7WRiK2ro+143JvRyyCCm4x3A1ZPU/SuQ2m3zt22jeTx4ackQAIkQAIksPsE2i1WiZgTz0Gl+lJ7bh6oqblLSc89FlEcV3mrjJgQPhxVfryJ0krdHsbkwwLKNs+qbFeeutEPs414GaWH/biuxIjbM1jxnz06EM3O1Sb/6aBagqiXdeTQf8O0NzKPtbsJ65QT0/NYLMv29J4IF5S7g+HHag4t25qb9gQRB3XFqnAL9u30yeneW/myJbFK/IWUymel7GYSo19fR+/9fv1wO8hDJWJVXz/6/3oHww+98XXtKGHc9JzT8S2gVkqore3V8tb7M5h/Xkbp8bDJfeQKDQkjtF9tUkX8KT8jlc5Yuy5jcS5n8zm5PmRCp/bRR8L3TraA0kvlv6zp/EqF4QzSqTTuzq06rW3G3zSBDndSKdwJfJgaKi+LZolzguDknCDwpd17nf4+8IvmsaJ99jWsPpvUvCNiW8xXidbuvzO22j6xymyscQeTz6xvWKugPHvX3tND/7sZsao17UJ2rLyD0bll43MGfr8jQO6CL4f1RYxan1u0j7XVRatDxHdw9MfgTb3f/2KVIlNbQVFfjG5CwjR6BvMoi/4kBH9dRXFYEqKpm20ey+u++ivvK6h8b29YanKg80LNO7uo7LZYZdTL/P3r5smIPmcK17/2Eiv+uor5cXXzCfub/ssgCo7674scpvs1LMtEp17dO4msUg9PVueRy5gfMv10R+dwWoVe574dsSqpztujWKxUvDplgKN/zc0r5KTbpBIpZmcw/9I3Djk2zklNrPKRLZiB2lI+mPjpevUOlKXYTo61HwoY/IvDJN2DQTe5q50gNC1WqR8IWTttxzGWfNN2ZePHQtDGILQz8Qcg3ueY3e3QNoQu/5IACZAACZBAewm0X6wyEetqfhGmEgj7tIGVx1ZY0r/TaWTGF1FZmtHzu6gDFy3rLpGp/VDEsDunSpm57XITT7jXyvno3EO1Q80/Hpaxthm2VL+qlJD72p17pt
EzXIzOd1XBpHKDBay4wSd2ThM+oIyeayd9itb0dr1rTaxSY7HozLeVfZVQkaWWwbKm8DpYXS1i2IqVRmxS/o/DMGHc4mKVWh5aQembu0Y0lTnoX4ZRXG2gntrT7FebVCxnEnyv+Sb65BB84y+T+Cr/MB9zhpv0N5N8sCRfOLHn4kvHfa2Yr+j7Raq+RF8l8UT6w3aLVerBftSHdu/pLYpVqsWtaBeoYeXxIEwea+PT+n6/IeO3MUFDaODL6TrWlxHTIbRWYs6wH//fGWKVkJP19SrSZat7piq7VRm/3mbLy3E7+furjazydwJ069yQ7W1bbJhwalS3e55tvDZr12tosWUNz5Y5k00AACAASURBVNRwPXzDI7f7pc3XsIWhBH31J3/eaYNy3uetvPVFyGaZbGzRh6ANe2Abwbn4ggRIgARIgATeZgIt/aaqOUedWZPMCet83RChHFuvbvdgKbvVeZot59btv96NOvw6/x3f/1rDxhbzTxeLnjfuxvxfbLsZu3IboF7L2DdzrJTdC5sUv2o3+Ph93sP34m/UvZ+4bRG+W/VZym01Dm7dW7yWdu5ilVuccRe+3m0bCa6jJrQLNOeXBvkGtxrTRjh2u5+NzrXD7zpLrNphZ3k4Cex3Ar5Ytd/by/aRAAmQAAmQAAmQAAmQAAmQAAmQwG4ToFi120RZHwnsgADFqh3A46EkQAIkQAIkQAIkQAIkQAIkQAJvBQGKVW/FMLITbwuByvd51NuO+m3pI/tBAiRAAiRAAiRAAiRAAiRAAiRAAo0IUKxqRIffkQAJkAAJkAAJkAAJkAAJkAAJkAAJkAAJ7CkBilV7ipsnIwESIAESIAESIAESIAESIAESIAESIAESaESAYlUjOvyOBEiABEiABEiABEiABEiABEiABEiABEhgTwlQrNpT3DwZCZAACZAACZAACZAACZAACZAACZAACZBAIwIUqxrR4XckQAIkQAIkQAIkQAIkQAIkQAIkQAIkQAJ7SoBi1Z7i5slIgARIgARIgARIgARIgARIgARIgARIgAQaEaBY1YgOvyMBEiABEiABEiABEiABEiABEiABEiABEthTAhSr9hQ3T0YCJEACJEACJEACJEACJEACJEACJEACJNCIAMWqRnT4HQmQAAmQAAmQAAmQAAmQAAmQAAmQAAmQwJ4SaEqs+u2338B/ZEAboA3QBmgDtAHaAG2ANkAboA3QBmgDtAHaAG2ANtBOG1CqWFNi1avVVfAfGdAGaAO0AdoAbYA2QBugDdAGaAO0AdoAbYA2QBugDbTTBpoWq/Y01osnIwESIAESIAESIAESIAESIAESIAESIAES+Lcl0FRk1b8tHXacBEiABEiABEiABEiABEiABEiABEiABEhgTwlQrNpT3DwZCZAACZAACZAACZAACZAACZAACZAACZBAIwIUqxrR4XckQAIkQAIkQAIkQAIkQAIkQAIkQAIkQAJ7SoBi1Z7i5slIgARIgARIgARIgARIgARIgARIgARIgAQaEaBY1YgOvyMBEiABEiABEiABEiABEiABEiABEiABEthTAhSr9hQ3T0YCJEACJEACJEACJEACJEACJEACJEACJNCIAMWqRnT4HQmQAAmQAAmQAAmQAAmQAAmQAAmQAAmQwJ4SoFi1p7h5MhIgARIgARIgARIgARIgARIgARIgARIggUYEKFY1osPvSIAESIAESIAESIAESIAESIAESKBNBH777TfM5AsYHB7flX+qLlUn/yOBTidAsarTR5DtJwESIAESIAESIAESIAESIAES6EgC+cLcrohUrtil6uR/JNDpBChWdfoIsv0kQAIkQAIkQAIkQAIkQAIkQAIdSUBEpumHBZS+f76jf6oOqa8jYbDRJOAQoFjlwOBLEiABEiABEiABEiABEiABEiABEtgrAiIuKaFqp/+pOqS+ndbF40ngTROgWPWmR4DnJwESIAESIAESIAESIAESIAES+LckIOISxap/y+FnpxsQoFjVAA6/IgESIAESIAESIAESIAESIAESIIF2EaBY1S6yrLfTCVCs6vQRZPtJgARIgARIgARIgARIgARIgAQ6kgDFqo4cNjZ6DwhQrNoDyDwFCZAACZAACZAACZAACZAACZAACfgEKFb5RPieBAwBilW0BBIgARIgARIgARIgARIgARIgARJ4AwQoVr0B6DxlRxDoLLFqs4bV1xXz75fN3QX8S9XWXcX6VlXXfg7asV7b3WawNhLoPAKbWC0U0J15ihKvh84bPrfF7j22+pv7DV+TAAmQwL8FgVq1iqr6t83fs+D4JitoqfxGzbStWkVtY+vhCOpe37osWqy7iRpZpB0E3HHaVv2hDTVnoq2Ub6UsQJsLB5BiVciCr0jAJdAhYtUmViemcKh7AO+cCf990PsMq1sJS88e4ZA95oOBV27fzevN1xi8mcW7Tr3vdI+ge/Z1Ytnx3pFo2TMDaKod8dr4yXYIVJVY2YSgWK9uccZ33RF/hWspZZsjuLZU7+Rv+vPfsK7E3so2Z+B1m/8C3fbaPDqScN3UPW6Xvlgq4AN1/aYK2G/ox2/uoU3shEPtFQZ7R/Ced4999/MJXHtajQ5UfkrfhxPvp9GSfEcCJEACHUOgtvAVzh7rwuHDh+2/Lpy4OIblJkQh3cmfirj1h6PO8YfRdeIysi/q/OYmlD/6n5cx9jIJWQ3Phy/jRJe07TAOd/0eZ/++AO8OrQ9eHr+Mkx86ZQ934fdnvsJCUuGNZYxdOYmjQb8b153UOn62FwRqWPj7Wfw+YgMncHl8uemTV7+9hdOeXZy4mMXzOmJmvPxRnLySfE3Ey6rrp17dVRR7T8ds7tz956hztZg+vsqH19gfskjw6ppmsd8KUqzabyPC9uwXAh0hVq3np/C+cka7R3Bq4CkGcwV8csGIVu/ffIY691hg80crIJiycefqZ4zfHNKO17upKVzLfYe+gQkrig3hVP5nZ5xqmOmVstPonf4Og9Nz6NYCxQDe712s3w6nFr7cGYEdO//i0N98trOGxI7uBLHqGU61RdTZxOrsI3RnnuBJ0kQ4xmqXP5AxpVi1PdGu+gynPjP3SHWPPd47je7eaRy/OIQDWsQfwrEHP4aDRrEqZMFXJEACbweBhVv4WIk1XSdw7u9jyOfHAqe4649jWzvFP03hghYRjuL0jSym8nmM/f2cFZc+xq0FD9N6ET3HlZh0FKev9GMsP4b+K9Z577qAqZ+i5ZeHz6LrsBK/zuHW8BSmhm/h3AkjrH18J1r5qwem7OHj5/DVeB75/BSy/3tSH3/4P/sRlTZeYeyPtp5Pv8JYPo/8/2Vx9b/MZyfvRUtHW8V3e0lg4c7HWghVNqDHdVyEpy6cfbC1bFOduGBs4MPT6Bme0jb+1WcnzGfHbyFqRUBtrsdcEx+exuX7Y8iP9+OyFWO7/mcqIpK2VncNxRumL0f/cMvY3PhX1p67cHY4yeaqeD58wV6jXabNFKvqml/p++cQ8atuIX5BAh1CYP+LVZsv0H3ORieUnTCqzSVc+lx9nsWlxSTamygNjGhn68jFk
eRIgLlpI4KlCig5VUOisT5/hJJU/XoOx5Tj9uk0Ztyyqn2fqnaMoe8NBJVI8/5d/lKs2slIt0us2kmbduFYilUG4rY41ALB/v3rT71IVSVCyoOCMfT9y44VxapdMFpWQQIksH8IVDH2qRKOPsLVb924jmVktZDThat59/N4yxd6P9JCwukhz9F+8RVOKhHs07GIc2+Eh7hjvjx0WtfzkStAiRB2vAdF9+nsehFXP1LtvoCp4EHRAm6pz7ouI++WRQ35K0qA6kLPU6f9C7fwkRLBruSjES3reVxW4ltXT0zEcI7my70i8NMYzik7+uhq1AZeZnFWj9NVNDZRaxeHTyPrRe497zupbe7ceGBEABZwS4mpXWe98svI/sFcK6EA22Ldr8ZwVvXlD9mocCr2nGRzcz3adnWk1vdZnLbHby3R7dUA7fw8Ii4poWmn/1Gs2ilBHr+fCOx/sWpuGr9TItGXT2ORS+sTOS1C/e5vL+JMl57gSPcADlx8gqU6ztXM30yk1PEJfxJSw+CXSoAaQvecrbquI/gz+q5aMa2VNUibVTwZmcLRcyZ64cCfhnD05hM8ee0oYStzOP5ZFgczfv9e4Lz6/LNZzLg9f/kdui8N6aU8B/6UxfGBMtb9Ovz3wfHxOmcy6hw59C29xnjvGN5XPKVeAOvfFXD8wpBeFvneuTGcL7g/dKriTaw+fYJTtk3vdKs+FjCz4vQRr9B31falWsa16yP6PKaskwPJtlu1QS0Ffe+cbduKdCB+roOXptDrLGHS/bG8Vf0HfbZqKVQmhw+0+DiA9z7P4XzuZczuoMZuYCIo9/6laYyvSBRfC8sA3bEoP9Wc1HLUdz8dw6mRclQ4eDKr23v8wZI+96E/ucve4n33OS89yOHgZ2asgr5fnTPL5oJ2LGJ1dhpHdf+nMC5oX36H81/YcTkzgPcv5HB+OjpFCGxFxiNo7ytTp+WumF6K2QmA12X03hzDQdUvdY5LU7hWSFB/X3s2opYCl7e7DPA3LE3P4pi1YWXbx1TeLdeMAzYvjL1/bq5XM+abwKa5NnS7u4fwwfUnkbxdgbharmImY64hxf/QF7MYj1wHFvZec5D72qdTGP9FBtz9u4mZ2+Y+eeieneEm3k/jNvjBF7MYfBnmvVoaUTaYxbGRqO2o+8RM34j+rjsv5eP1+TbttjL22ueYYLPwrvd3Px3R98xgafmjaRxS94i/xZ+GrOYmdHuPDjkRZ7FG8AMSIIGOILBind9Y1JHy2Y2Yc9gXcyIde2Ud+MsJgoF8dxZjwa2viB4lMPyHH+UEqPvSgopuevoqEI9eDRsB6/KsP1cFqi9U5FQez4NIrCqee8dLU6WeHpnXqi9+eq6PX1jx65Z296AoFfDvGyMgY3fyvieGKhO1QmmSfQQNFhtPsmP5zo0gnOvR0UsnEiLraisLUZuR45ut++lXOPEfJ3DOF3YhNudeK7YHT/vR862dnMn5GFkVDK//gmKVT4TvO5nAvherlgZMVNSRbPArH/IWR0sJUuGnACrou2IFJBWNlehchcu2esXBduqIn/elieTqzmHQdWarT3FcCSjnZvHEOb7xy1fovWic8nc/z+GT3ml8ciVrltycm8KMOI3Sv9iSNRshc8YRFMoFLc5pIUfV+aVxjI/cnDBLg6SOFuoUR/vYlSG8+6kSh4wQps5x7G/T+nzvK2dOBKBIlNsmStkxk9+rO4tjN6fxyRc2N1j3CK4FUXIyDmM4dkUJNeo8WSNYnRnAgStzWFUwtxCrlrImiu4de65uJXxoYWsIpwpmEtZQrHKWQr13cQLdvRM4IsJJZKlpuHRUC2pfTuCYEjDOjeCIXkrVglglY3F1CqfOGXFMxk0LNu55rQ0fTI1o8VaLdV8ou2+Oc0OxStqRGtG2osfg3LQWQtcLElkzhCNfquVhE/jACobuNSm2EuTrkmvuirJDKwwGdqJEYEewLD/BUVvnoS8m0H0zF+SnOzLgiAFqjHSUpRH0jqtyfxrA+xdHcLDl5Y0/Y7zXXHMH/mSWvgXL3s5NIXjAKGwujeGY9MOKme+cm8L5m0NQQnPEZq9/FwicwuWTmyM4oMRasRfV3sh1AOANcFh9MKbFwYN3o3fQhvcvGdsgB6Brg8ZOwms9Gy6nFpb+ck2JTu2ewKC+97n1Nbp31Gmlw1GJo+41FdjTphLnzT344BV1vU/juL+ke/MZTim77J7CuGOu6vel95I6tl5Ub5128WMSIIH9SeDbq/FopqClRVzVES3xZVJBESyj/z9VtMnVBGFHRTSp7w4jEIn+2Y8Thw/DRE/V8GphDP29t3Dr/hgWYlNdOd5GTylxafgr3Or9Ctn8ciRaK2xP0iuJiDmJ/n8mfe999rKBgOcV5dv2Eyh+6UczOedsaL+23D/7TYTflwnSYy2Py8rGD4fC5PK9EzrSUEdPKQF1vB+3em+hf3wBr/wcbi3W7bTce2kjtD5qdK0pf4CRVR642FuKVTEk/KCDCex7sco4ewM4lU+ibEWbz5zleiriZyKnhZ9D4oDFnCtVV4Lg455CxAGpQ30nTlCQ18U67t1juPbMfyrlVua9fjGnozkOuWKEii6wEQxBpJc4dyI0BdX4bRfBZwifTDtKmuPcvyN1NF0nIOwjecG+m7UJ64fQ/SSMgijdzWqn94N+Ky78aw7HlKPnOv6q/YuPjHh26YkRoSBtH8AHErmhylWf4RMtCkTFH3H+A1FEM3mJ3qsjOPi5IzKoz58kROUl9n8TEmUXOLPq+M0qBv+sokpUX623KnWmHjkRNOGS05YSrEtbzgzgeC5p3BxRx9qjFjgWhTuApjmrDlm78cWCoB1DODXrtAMmGujgp2OOuKjO+QRHldjiXHexcZH2fjqBwSBacBOlfiM+HxB7DIRlL0dc7SUuafFgDL16+Vl4fUTsROWls8JvSwnW7RLgAxfdJcCbWJ2e0GLg727baEZh4wpLm1X06chLJaY+wZIIGSJcn5kIxA25ht6pYy8Hrj6114EI7HvLYabXCDafzEon9AXV+H/+/VRsMDLWzrWuhE9dvYg80WtartMDIvJJfVveO5KaKRwHELmWg3uhXa5dmNK/EcE466oq6LuexQdfFPBE385DmzuVd/iI/bvLxJOaws9IgAQ6goBErZwejilFZjmUXmoXOvLxTomg5C8jVJFLksvKEav0kqbDOD1UtMsMjZhlErsfRTTJtESb9GBKcghpYcEc03XiKvJBVJXfsmVMKRGs96rNB/QxLkzEI3OCo/45pQWJW1/aXFvHL2DKWzIWlOWLPSQgNnAa2YSH60H0340EIUpaKYKUv4xQTbcll5UjVhVvKPs6jey3dpmhY3OHPzyHfnfTgBbrlib5f5fvmeWIydehU5pilQMj+SXFqmQu/LQzCbwdYpUbYSQiR+AgoU5klS/4eAMoznbgVNulSn+2EVDKWbf/Dv3ZW773agmDOgG7SsIe/nuSNA9yTiuOfJAIXhxltw26vNf2lSc4otqS4DxJ5MT2xaosLkVykdtzO0KFbpLHS84bCG9BP2WJpeT4ErFK3gcFA7HM
FSpjokhYPP4qaXlYIlObd8zvk6rx2SMdtSNLTQPnfsoTJyU6xN0NcCs7kLYknDe2xNXylXZIZ5vnrDuTnGBd2iFRbFJ5vb+bz/CJtv0wsi82LnXaCzmXCGaSC67BMt9jDypK+bBRLhJ94zQuEBDD3QBXS+F1F16DS1YYghUnfdtWdS7hkoq4Ubnp9Fu7xNBrn0ReRu1bbDkUY0SsipZTQqjkusuZaKI3xEHa515jmqxdxqmXyuolx85yZDu2cp8SGwyWCQZDU8PgdXOfPFUwH66OmEguOVYvAdQifSgIS30xZvDvHcGJwhfCMeleaG1C34etWKXFSu9SDisLBe9QXAWkD0dHlF3yPxIggU4n0FisEqGgkVhllguaBO0n0aMinqpVvFrI4vKJLnR1mWTlQWSVjYRRn3f9sR8LP5mbUHUhiws66fpH6JmTG9Mysv+thANVz8e4+n/LqKnIltor5G/apOl1l0MV0ROIDL/H6Yv9yDeah1oRTYtm/+80Lt/Lx6NoOn2wO7L9YoN1xCoRbxqJVcpEJUH7f/Ug/7KKavUVFvQOk8q2lI2FNm4iuYztnr23gKoyx40qFiTJ+Uc9KIqJtlh30hAsT5jk6V1/9PJYJRWW/ta1+6SD9v9nzFm1/8eILXwzBPa9WCXiQMyZUrzEaQ6cfVmiFS790lg958qgts6vK3S5YzA7EV0eI0/mu7P4ROUxUjfp2s8oqdwlfgSRPZ+IWfI30odaBU+mCzjfO4GjyhmUpUVnBhA4cuLYbyVWyfli5UKxZftiVeh4Gzx1onO8Nsi4ybI+1+k1eaekXnHwQ+FDhiHJkY6JIlIYv0EJFNcy0zh+ySxZVPmfNHsRRlTZJKYi9skyL3HO3SWOmq0s35S2BydXxmDznDnfCRNph/0b2EFSW6RKEdpkTG1dgW3Ycs1zVgfUGbtG7cAm1ssv0HtPLeNUeYXCpaDvONdObFzqtDfgL2NiRYMgj5bL3l4Tus8yRnKccFJ/f3mK44qt853Yjlx75q/YmIyj5D4zS0+NjUr/bNk6bESsCsZSt0dsObSBGJeg3ZsY/7OyT1v2jXAQ0W4AsciqJNutY4vCOsrCdFQ4BXYrYlIQWWlFO2fjitZsOgBqXghHaav3dfDWjcizedg+uTeHmbIz+9aF/fZJ5JZE/AU18gUJkECHEmgsVkny6NCRr9fN5YnLdvc/E/WkBKYTX+axYJOmB2KViEKxJOhG9FIJz8McWSJUHEZ8Zz5JDH+i7tK+WlWJElUsz/VbIexj9Dz173O2Rxs1XbZaXUbxvt157XgPFuoUr8eBn+82AbGBOmKV5FXbQqwCljF10e7+JyJm1wlcnV2wOddCGzeRVQmJ95WJJubIar5un071W7PrYNcJbwMBv6C8p1glJOr+ZWRVXTT8ogMJ7HuxKubsuJDFobc5qyS/zntX51B6XcGq/MsZ4emDu2X9mRaaguVnoWPpVi1RTpKXR9oRjx4ASvfMErjgu9rP4bmlDa8rRuBSJ1l6imNB7p0sjqpcQPfm0Nfr7VpYx1EORAcRC8SxTHLQhJF812ydkGWAPp86gofXBnFgD32h8hwl/XuEcZ0/Wxx8ERLCUZA6XCc42flXSdpNEmgtelxSOWgeoXd82iw5dESMQCwRHno8bPTMpyM6f1hie8dVLLy01WeiKvHEB/XRlnZgz+u2Rbov43T9O/OJ5Rs4/bacMNqaszqgztjJuWLtqEHl+TqghCAl5F1Q+dVmcW36kdkZU+wvyVbqtDfgL2MidvN5ro6dTOPSo0ooMspxwkn9FdHa/a7qXP/BNfizPUrGcQhH/pxkm+qzObMTaB02cj9wbTPJPpLtVTc6Kla9EQ6ARDH5EXsuXvj3EG9sxQajLEwNwim0W4m2smKP3UDDPb/U15xNR1oaRNEG4rz3deTtZhWl3CMtbsvGDUrUPNj7LMg5psqb+7uN/BKxrdkoxMgJ+YYESGBfEmiU8ydY4rRFHh3pmCRIz+chScsXbqrIKmcXNhGrkhJSw0ZDBVEjIlQkC1IitAVCmLQj6a+IGv8z1VSuKxElLky46QGSKuZn7SbQKGdVbfZyg5xr8ZZJgvR8fgGvtBC5YBL+/3cY1SRiVWLSdlnGmrBstpm63RbVZGmrv9OlW8h/TbHKJxJ7T7EqhoQfdDCBfS9WocFugLIcQ5IDi2MUjaYIl+vJ5+JUSZ6i+HITya0SLhMSB0qOjYy5OJoxZz9SKngj5z02Et3tTNofOHbiKP85sg4vdM5FLGgQdSLLyQLnrdk6kwQI3YM6gofHQMS+WMRGQEFeiHCwA7HK2siBL+aiO+hJX10RQz5zx0rEjgsFI1BI0xL+GjsIlyyFRezyMYmUCb+o/0ra4rZPStvIvgO9dicyTyCQYs1zVkfUGTtph8tEFRfH/PNpzETmqrYesb8kW6nT3phYJUKIb+PSweCvPacTgRN+ZZZqupFVwXeJLxxhsZxYIPywDhu5VqP3A7HlUMwM7xtOziNdu9iLXf76Rjg4+cc+nQiTyoe9Ny21m1wE9yVvbMUGzXLN6MFyr3PvA+tTE1oAVcvozPfhfVYdLfW5x0RrbfBOOAaRW07ZX6rOwwrnc/1SRRA+xSf6IUK0PSrP3qEzA1D5rUTci/9m+PXxPQmQQMcQEOe3wW6AXf+bD3bnS+rXqwW1K584/26JhKTR1SlcUJEtgSDllE/4buHOR3oZYJIgZb5z8mHZ3f3yLyI/2uYE0k/nvPHdBMO2iBC2ZQ6h8BC+ahMBGYv6uwF24Wq+QQjcRnyXyaCpVsQ0Cf/NpyqPlVoOmjT2se9arFvOGxGqEsxVysX+JthxrEwHfsBlgB04aGzynhDY/2KVyu1iHYjz3zmJpWVZnrMj03p5MZIjKshV02d3vLr5RH8f5I6ySZbf+XzWJtQ1zNfzdgc0J++JOKe/+3P0qTvwM8Z1Eu4BBJFVWwydcWA9hwg/22VkzjJAEaEiOxC6ybxF4LE7FZ5pIsF603XuLLJKHLxIAmrFZfNH9N6cQHfmqU1QLg6+9CWEFzr6/mceO+s8i2gppSXRfkTEEPEhkoNIoj2UCOXYmErWn5/F8ZvTuFQwv6TiaB+IHA+sz5rE3MGyLmlEo7/SFpXA3d0dz1mi9InkxvIEgqBa60hvzVkdYQWfYNmsrUXa4YtV8rnXV5XHSznvu7IMELLL5hh6V1xBR+0IN4XjvbPo0wnlZYwGcHxCIqRU+8PrLzLOAaDkFzKO70sidSlWfYbzX06gu3/R5LcSBh4buR80K1bVtZfgHvNmOKjlqzO9JirxwMVHeBKZMG5i9ekjs8to9xj6dKL7hByA1gZ1zjx3CCXhfLDLn4Usu+xdyuG4WuoZMLDft2LTr19ifLrsiNRyL1T3CLcx8gDCCImlnFrWOobz7nUXREcOQHJsmRbZe9SnORxXuwD6/RHb4V8SIIEOJSDL6fwE6cs2AbqbQ0plpF5GMf88Ep20fN8kh/aX6i0Pn0XX4cOIigxyvo+d3FQKXQ3FGx9rkSBS3ooJsXw+L23y666rCHQKceQTEmlLW1zhTUSQj74
semKc9H0LEaRDR7zjmv3TGM4pgdMfV7EBL4dU9Z9F5L93f9Blx0p/N0gZZ+9zOZ8f8bReRI/Oq+aWb7FuZekLt3BS5cnaThJ/sXFHdO248UxosIhVM/kCSt+/2NE/VYfUl3AqfkQCHUVg/4tVWjCw4lF3Fkd7C+gdmMZRu+34+72LkSUbifTrOfrK0b1pHbXPxtA9MIdrvWMmB9UZL+/VL4s4JUv3UhO4NPIdBkceIXG7+8RGhB+u5sxuhQcuTKFXJ2B/iktXszh4zrQliGAIdkoz29wfV9urXxzCgdSIWd7mRLaopTpHVO6sM5KHx+TeOXJzwpQNnG3JubJ1nUYsCqNETA/qROdYxkEEl8P23dQUruVUwus5dPvbwwdL65oTq2TJ5YELatnYE8yore5fPzU7D3ZncUqNy/R36LuX0/mV3vdyGeGX7/CJ5jSEozencX7KZhsVfsrGMnNhHaps9wh6l+z4bS7h0ueGs/SrN6NsZgRHdN98XuG4x16JEJIawRE5b66AU5ZRZKe6JmxY2pPMWZ1dHPkBHLo6je6hF+bakXYENiItLeO8tvkhHLv31AjBI9P44E9DeF/nkwrHLGYr9dor53KiyQJxuHsEpwbMUKEUdwAABrNJREFUeXp7R6Bzjp2bMmOsmhSIZAP44GYBfdNzOH8li3cv2uvBqVN6UPevIwgevDJrrkO1JMzeV2T5bxAJ5rFpTazK4oPUEN6/NK2vA2Mvyoai95g3wkEBclio5Z6HvlDLaCdw7PMhswT0zBCOPbC7fKrysbF17qMXJsy1PjIb3KMju/LpAQl32VP3q/B+J6MV1tfYpsPcY+4ywpCjvZbda0qEV3lQcW4M5+09Q42Lvl+4m3PYJkkUr2qvm2xdWsy/JEACHU5AnP7DR3H6Sj+yw1/h8h+OauGoK7Js7hXG/mhyUrmRKAic+C6c+Oyq3lXv8pnfa6Gq68RXWPDxeOcbG++HlD/sCwSOiNV14hxuDY8h23sBJz80idfPDrs7/IWC1+EPT+LCl2o3wFu4+pnNVdR1Fll3h7+g3Ydx9D8v4Gpk98DDiAlkfj/4fs8IiNh4+MPTuHw/i+zfL+O0tYHIUs1XYzirc1J9hFuO4QWRTF0ncE7bxWWcPaaWqHbhxN+dgrZH0fONYey+lD+Mj29Exc2W6l4Zw1md0P0wPvqjuVaUjbr/pv7ZAOtbKlblC3OBwCRC007/qjr5Hwl0OoGOEKtUPqDV2Wl8YMUYvZyvewhHMovOE/UGQxFzrpyym68xLo6xjhYZwIE/jeC8jaRxSgLVMnpvjuA9vx03n3gRCZGjEt78jJmMdcbtOQ+qPFuzU1psijhv1Rc4b8UL1e93U7OYeR1fhqVP8vI7XPpyDCZR9Bi6p1+FuX5cZ7vJOmMChD5Js2KVcoKrmLkn4p8ReJQzfKT3mTNurUVWKce676rsyKgikgze9SezEfs48FkOfeXktkbKurlnFp8GQpG2McX78wlc+86N5FHi2CLOK9HQjp0Ss84/UQ626uM2xKqbz7A6YRP1B/ZQiNpUQxtuhrM1w/JcIMjEdrxzbcQWh1tetU33tYxr2iZ3R6zS13fhUSBuCPv3Lk5j3Iu2Wi3M4sifrC3J9aCiLFXbWhGrVP9qL9F3M3odvtOdxfGBciiAi7jmsWlNrBrBte/KuKSichyb6Z6NLgN+YxwUi80qngxM4JB7b1PC98Up9D51n84miVX2Ws/498Ysjo+4XpEYFRAs7z5TJ1F5U/eOMCosugRxE76dKO6Hbj517jvmN8W1JVUmbnO2zbIkVkVBPnEjtpw+8SUJkEBHE6i9MLv36d3wtLN/FCevTMGVglT008JNFf3UhYhAoHr+UxG3rEBl6lDHZ/F8vQ6Wl1PosYKYKd+FExezeO7dcs3RVSzcPxdN4P7hadxK3N6viufDl62YFSZ7//2ZW8m7AVafI3vlJI7qPtvyXb/H2V7uBlhn5N7QxzU9ries0KNt5sOTuDwRtVDUFkz0U9cFTP0UbWr121tWoLLjrI4ffu5F1YXHLP9fjxXExC5O6PJJJtp03ZKzzbU373XSktegVW+pWPXbb79h+mEYEbVToUrVperkfyTQ6QQ6RKwSzL9h3SZLNknS5fNd+Bskw/aEicSqpR3NlE2swHwo56w2cTNR+VZ+2YaTJDm/PGdbN2C7dTboUvJXwquK9W10IbFOxS7GTc7TzLhsYr1Spz02t028fq8lulydOryiiW99IWSzZhLzb2ec9Qmk/020qeok/E9snPuhYqUSljdRr3vYdl5LYvSGDGx7Kg3yM7RybuH+uhm7aaXihLK6f02c501wkObac2/rHrvrLLey6U2s/1Lv/il228jWwzLbur8KM/4lARJ4awjIDnq1jfpdqjX6+bG76jU6PlJzzezY12x5075GDXBqX29j3c5p+HIvCciujY1soIaG9qTtotHx0f60bnPN1x09E9+RAAmQQJRAh4lV0cbznRCo4kl2Ckc+HcO1sqsGhctpopEHchz/vlECvlj1RhvDk5MACZAACZAACZAACZAACZAACZDA/iBAsWp/jMOOWxHmaRnCwUsT6L6ZwyG7XOrAhUfgypUdI979CihW7T5T1kgCJEACJEACJEACJEACJEACJNDxBChWdfwQhh1Yf/EU578YsTmrsjiokpAPLGKJ0bghpP306vUiLvVOo3u8Tl6f/dRWtoUESIAESIAESIAESIAESIAESIAE9ogAxao9As3TkAAJkAAJkAAJkAAJkAAJkAAJkAAJkAAJbE2AYtXWjFiCBEiABEiABEiABEiABEiABEiABEiABEhgjwhQrNoj0DwNCZAACZAACZAACZAACZAACZAACZAACZDA1gQoVm3NiCVIgARIgARIgARIgARIgARIgARIgARIgAT2iECSWPX/AbJjv6CX0fB6AAAAAElFTkSuQmCC) # + [markdown] id="kB_XY3n6RpUq" # # Model 2 # epochs 10 -> 20 # + id="MX9SI0GMwI81" inp = Input(shape=(maxlen, )) #maxlen=50 x = Embedding(len(tokenizer.word_index), embedding_matrix.shape[1], weights=[embedding_matrix],trainable=False)(inp) x = SpatialDropout1D(0.2)(x) x = Bidirectional(GRU(128, return_sequences=True))(x) x = Conv1D(64, 3, padding='valid', kernel_initializer='glorot_uniform')(x) avg_pool = GlobalMaxPooling1D()(x) max_pool = GlobalAveragePooling1D()(x) conc = concatenate([avg_pool, max_pool]) x = Dense(128, activation="relu")(conc) x = Dropout(0.1)(x) x = Dense(6, 
activation="sigmoid")(x) # + colab={"base_uri": "https://localhost:8080/"} id="1haAny8mRynD" outputId="d14c5431-ff9d-44cf-c3f7-f62f46d3ec77" model = Model(inputs=inp, outputs=x) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) model.summary() # + colab={"base_uri": "https://localhost:8080/"} id="1ie66ObqRzx0" outputId="527a2337-00f4-4991-b6cd-54f824a61a8a" tf.keras.backend.clear_session() from tensorflow.keras.callbacks import ModelCheckpoint #checkpoint checkpoint = ModelCheckpoint(filepath='/content/drive/Shareddrives/SOGANG Parrot/Trained-Model/0408-ver2-augumentated-pretrained-embed-Glove.hdf5', monitor='val_loss', verbose=1, save_best_only=True) batch_size = 128 epochs = 20 hist = model.fit(X_t,y, batch_size=batch_size, epochs=epochs, callbacks=[checkpoint], validation_split=0.1) # + id="5oZ3KGrqR3ei" sample_submission = pd.read_csv("/content/drive/Shareddrives/SOGANG Parrot/sample_submission.csv/sample_submission.csv") sample_submission[list_classes] = model.predict(X_te) sample_submission.to_csv("/content/drive/Shareddrives/SOGANG Parrot/All-submission/0408-ver2-augumentated-pretrained-embed-Glove.csv", index=False) # + [markdown] id="6p3kmCQweqiF" # ![image.png](data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAABJ8AAABmCAYAAABlXzCzAAAgAElEQVR4Ae2dy28UV9r/82ewzTLLd5ltlhErlpG8smYxQsqG0W9BftKgV7zRiHhGQUkGtQJoXprLmBkTZjoOjLGttuNLbP9saNKhLYxNmqgHEzM4abXjdGLr+9O5PFWnTlX1xXbbjflGIu5L1bl8zlPV5/nWc57zGvgfCZAACZAACZAACZAACZAACZAACZAACZAACXSIwGsdKpfFkgAJkAAJkAAJkAAJkAAJkAAJkAAJkAAJkAAoPtEISIAESIAESIAESIAESIAESIAESIAESIAEOkaA4lPH0LJgEiABEiABEiABEiABEiABEiABEiABEiABik+0ARIgARIgARIgARIgARIgARIgARIgARIggY4RoPjUMbQsmARIgARIgARIgARIgARIgARIgARIgARIgOITbYAESIAESIAESIAESIAESIAESIAESIAESKBjBCg+dQwtCyYBEiABEiABEiABEiABEiABEiABEiABEqD4RBsgARIgARIgARIgARIgARIgARIgARIgARLoGAGKTx1Dy4JJgARIgARIgARIgARIgARIgARIgARIgAQoPtEGSIAESIAESIAESIAESIAESIAESIAESIAEOkaA4lPH0LJgEiABEiABEiABEiABEiABEiABEiABEiABik+0ARIgARIgARIgARIgARIgARIgARIgARIggY4RoPjUMbQsmARIgARIgARIgARIgARIgARIgARIgARIgOITbYAESIAESIAESIAESIAESIAESIAESIAESKBjBCg+dQwtCyYBEiABEiABEiABEiABEiABEiABEiABEqD4RBsgARIgARIgARIgARIgARIgARIgARIgARLoGAGKTx1Dy4JJgARIgARIgARIgARIgARIgARIgARIgAQoPtEGSIAESIAESIAESIAESIAESIAESIAESIAEOkaA4lPH0LJgEiABEiABEiABEiABEiABEiABEiABEiABik+0ARIgARIgARIgARIgARIgARIgARIgARIggY4RoPjUMbQsmARIgARIgARIgARIgARIgARIgARIgARIgOITbYAESIAESIAESIAESIAESIAESIAESIAESKBjBCg+dQwtCyYBEiABEiABEiABEiABEiABEiABEiABEqD4RBsgARIgARIgARIgARIgARIgARIgARIgARLoGAGKTx1Dy4JJgARIgARIgARIgARIgARIgARIgARIgAQoPtEGSIAESIAESIAESIAESIAESIAESIAESIAEOkaA4lPH0LJgEiABEiABEiABEiABEiABEiABEiABEiABik+0ARIgARIgARIgARIgARIgARIgARIgARIggY4RoPjUMbQsmARIgARIgARIgARIgARIgARIgARIgARIgOITbYAESIAESIAESIAESIAESIAESIAESIAESKBjBCg+dQwtCyYBEiABEiABEiABEiABEiABEiABEiABEqD4RBsgARIgARIgARIgARIgARIgARIgARIgARLoGIEuEJ+2UK/Xzb9fOtbPPS24/n0Z5f/U97RMVdhWtYJypYqtPS+ZBe4ngY6M4y9VVJ5UUH1JrpH95M26SIAESIAESCAgsF1F+esClr7f+3laUEenX/xi58X1OueEnWZ9yMqvf7+EwtdlVLf3uGO0yT0G2q3FbaH6pITCV9OYvlfCahfcRjviV+0K/8unXeyqu3t88gGKT3Ws3htBfzaDTCb8d/VOAaubO+nlKhZUOfOrOzm59XO2KpjW7V3A3tZURekLxWEIS7XWm/NyHdnaGK3Oh/YQsY3PxrCwtIYuuA82wL7bcTQ3tC1v0lBdvKWvk6GljQZ18ysSIAESIAESeFkJ2DmCMyc0c4As+m9PY6nFh371x2NmXnmtgLVOodgy4pD/W73b6rbWlzD9xdXIvDiT7cfIvdUun/vstucv5/kbS0PRsRLbvXYTY/cq2PDmcq310l4Hd5bQdMZXW8JQxPdZQ+GamUOPPd6b2TJtsrVRa/co4+s08iVb85narbfZ8Wv3byKTyaI/l0PuTgnrzU7Yxfdp/p74fgvPVOG79ataaOCzhRb9773WLlpo2yE85IDEpy2szt9EJpvD9OIqqjbyqfqshOlcFpnsNCptR3js30Vaf1pC6ene3NRdm9paX0Hp8fohfsrV2hiZm9EYipUKKsG/FZS+GsLVTAbZ2wWstW0fLunOvt7VONqJhLnhOu38ZR0riytY7+J+O63lSxIgARIgARJok4CdI/x9BLP3CijYf9P5m/q3P5O5iYV
nLcSGq9/LrwsdmadJh0R0iP1WywE7+fu8gJtKSFBi01clrFQqKC8VMKbmxZkMbs6vHuL54U6AHfw5Yge3vgzttTA3gVt/M2OWHV1Cte1m2utgR+IToHyUwtd7NF+kTbY9eq2e0J3i0xoKf88g2+lADgsp2d8Lfb91G4yyK7+qlQFpSXzqhHbRSuMO3zEHIz5ZBztRld9cwVgmg/YjPFoTNg7fEL5MPWptjBrekKtLGMlmkP2qcjgnYWni08s0zGwrCZAACZAACbRNIN3p3no6iwElzHxR2oEz33ZDmp4gosPeiU91lL9UESsDmH3qC2z78OS/aY95QBKBdDuoYumOGs8sZiv+eCaV5H6Wfh24R+nXds7YmVUftMkY7z38oKGvo+tpzWfawyYBMHW274PvrBXNGeys3LbPakV86oh20XZLD8UJByM+6UHOIO1He0tFQkmEx0YZ04M5FL/3ea+hOJjD9LcSlBpepDpE9HY/spkMrqqlWn40UVDmFtaXps0TCh0ia8OaN1dRyOf0ksCrnw1h+rH73GID5akccl9Hg7k3KkVM3DFP57J/y2EsIUS6/p8VLNhyM9duYuirpUgky9rXOeSmyrEw23jZfiiv06bqCqZtOxL77mO072Nt+7KIiqDVx7RZh3ryOD+Gmyr8N9uPW7qv4RilNEN/3OxmZELqb6H0Y7QUl5Mat4mvfU4AtjdQ+XoCQ5+psHYVVjqGQlIUW920P6eeXmX7kcsvYMUN+Q9sqI61r20/7VMqfxyD97+sY+mrW9qutI3MR9fk6+NyYrc55Fz7/r6I3OA0ypExAdw+G7vz+9zmuEWR8h0JkAAJkAAJ7BMBO0dIjPioojSonPkJlHXguf1tGyxiTc/ZzPxLzyvt77PM0zaWJ/Tv6cRj7wcUwNo3Q/q7gl7eobpZx/rjBYzZ32LzW72C9SDY3cw99dxAzzHNb3VkjurOHzJXcTO/0EL+qQ0rVkygLPNfh3p93UQDSCSA/iphPqPmu0FT5fxWjnOY1Z8WMKbnSM6SoB31SRpweP+mi0/A1nfTOmJt4L71F/Q8zvVbLBdhH8z/w+ugWl9D0Ub+Zf92CxNfe8svE8QnPZdsZb4479p10hh1uU36dn17AsUnL0/e3Ga+jghBvrAYn/d7NqGG0r1ek3yYhOHe+HYaucHQj1U+SMQndctU97U7CUuhxVepVrFi/R2//W7VzRmYowM/yp4cvNfXh/HXE68PdbxvJzmjC0Tuk62IT+1oF7adrm8d/y2xB6E9X67+fdHen500PZE+qrGZQDHqxIeVdcGrgxGf6mVMZDMY+KrcfD10aiSIuTmH6qzcrEcwci2Hsa9XUKmUsTRvlmpFwpVFvfxyArl8UYc2r9yT4wqY/ewWZpfKqFRWULhjRIrwyYW9GTshiRvLI8hmrmLonqqzgsrjBdzyonO2vl/AzUwWuakSyuqYJyVMfBZ9iqcvQm/SVV2yZc8v6fPKSwsYUoLOZwtYDSYotk35CUz8bQgLtu3qB0sJLCPL8QmXa3um/RnczBew9ES1bQmzt5XoMoFykH+rjTp+WcWC6tu1sC2lqVu4OjrirU13WxG+bnozsnm33H6lcYqGPG9gZTSr21V4bCZyK3dvIZvJYvo758lUQvuLYzlk3ZB/a0NDd4Zw9YsJFJZWsLRkfgD8cdTv/zWLhdF+DHnjmM0VsGar1pPL5VnkMhmMfWPat1azXybcGNP6vJe2EY4KX5EACZAACZBAJwnIPC4p100VS/9S4pNMuO2cJDOEIRVhcu2mEZHUg0rfIZf33vwKMEtMMtlZmOAUO0dQotIXE3rp36z+7TdzLvNb3UR8qq5gTOfcuYpbX85CL8HS77MYWXIfZPocJcoki5FSgoDkH67mKXY5nnrQOO0s9YrMd93jdJ+mMWKXhF3Nr4RRZAGjIT1Pu/pZDjk1P1H17rhPfqMP3/tG4hOezpp8UOIvWOc19Fssj4C92L29DvS8MasforvjG5nXyrlSh4pd0XlT5TpRdWxh9a6aw5rrZOyrWUzYB/TR+aI/Pl1sk4FdZ9E/Mo3CvdCuI/bvd6mL3jf1dWwUkivetOJvQrH5LINsbgxF7eusIObDJHDYqq2hUinq1Ue5OevPfr9hVpmklund17SN5zB056Yel9LjFZS+Tb/vNWdgGqqPc+7f+v2/JjCRc/z9GeXPZXDz7lq4MmZrzfij2VuYXjR9WlEBA+o4EYVVFQk+VgxRO9qFuuqeKb8/g6t3Foxv/bhollBH/HegLV/uzhCGrt3CxL0lrCwt2WTwVSxZ39b4/6J9eGMT69DBfXAw4pP6Lft2wqxttxDLz6tITNxob6zxKKkU8SkTj4gxA3sTxf9Y0LbM7JflyBOi9W9yWqyJCBFYR1E9bQtu7Bvm6VTw3k6I5qLpx7e+L2F2MYxCWZ3LIPMvb+13rYzCvfDJg39x4ccSbqkliA+9C3djRYt34YVj26QmZZFDbdivX2/E3upYX17AxD0vl8AvZUxkMo5w1XodOjl2dsRrC1AtmaTZIctIQ4I3zW9GlrmMgeaUcJFVTSLGYHmnfT/7NKhK/yivLc6iFCjEW6jMZD1xTx2/hcpXWWT+bidjYkMJ6/n9cTT9SRhHu4Tw1qIzaGn27t8Y98U2XE58TQIkQAIkQAKdJNBAfLK/jcFvMGROYnIhRZ5iy7EyR0AdK3lXuLJ9eF7QS/myM3YZf3UFE3dyuOXlVlq7N6BFBHdumCw6KGc9i1huqu0qSnoJ1hhWggd6CRwDkUeJBLcwMVfCSsrc2CQF9vNAWSckM4DCc1O+HDfkzjNQR2VO9SmLiW8tOWGmHrJFosF32aeEbh6mj5LtQPVQ5YcxdhMwbld8UvP/yLjJ+GYx8cQbt8DWE8QnyduU6PRmEERmJQ1MV9okIEmxo4Ku2HVo/0ld6pbPmvs69n4YjG0r/ma6D7OqrnnxYVIh+L61OtCW+fdZJ+jBFLB2T+VvdgIVrI23KgAaBrMoy6737t8gwMLatC8+xZYob0H3MTMbbghWq6D45TRWHDdLtVz7+9mFcEMK38dK4dOydrFVwWzWvz8D+KWCaRV8I8JXu75ckm+tNqRK+vyh2gyhyW9OSj87/fGBiU+6Y5trWLkXJubLZK6a5VnuLML+ILYsPnmCkgG4ioVsBrmStT5bpjuRUMeZHxH3aYE9Wz1FCIzeF5/UkzL1fanhtqZrd5VwMYuK2zdTfPB/X7SolnLIZJ2w5+BIdeN1byK2TQl9r2rjSy7DKS7hZYro1rQOExqfmKxOqcaRXTkSqg2e2jRqc3QMNKfBpDwQSjDKICNt3ljBSOzH3GuDjaqKPZlSh+ndbWwkUooNqcP8cTQ312n7ZDVa3+p8Fhm37Wn27t0YD9Y2on3gOxIgARIgARLYPQHrbP2rhHVxQjbWsSqR4pF8oCI+yTI8p3b7O+o+6JId8NzfdiMqNXdUzTwqmos0UXSQOU5CTkpZghUIEU5zIy/rVZTv2aX8ar6k/13FkFoiFThjfsRWWIKJXqjARE2nHycPNzOjKybVgzCT+ZIUuRd9krIO4V
+xg9knZvfDer2O6vMKTFR9Bpm/L2BVAuvbFp8S5o1WMM2Ijcm4BQJFXHzS/kJS7qmtDayplRgS3ZI2Pt1mkxKx6M6dpe0/lvTqgcC5l8+78K/xDRr5Or741IK/aX0Y37/V3ddsmt3vEsSnRn6RvT8E9zVt47lYWpQ0/IaB3Oe8v75NB364tfEkIU1HG8b9eL/+rScTThRti5FPUkgL2oW53ye3w00t1LYvJ9e9tEXtBDjo6BvB58pnrWA6kwkfMLjfHfDrgxWfnM5vba6jIkvKsiNYqdkv05zxWFI0c8Ek33CiYoWEZPuC1s7EJxNal8va3EZfzqL0eA3VYJJg+xE8PVDr/6dRXKpgfVN+kcwxiaKFc7E5uGDCeUewolfUef1zDjR9anRzMwerCUt5qahDxIcGcyZXU0QoarWOhBtX0B6bs8G5oQRfOS+a35Cj5ejj1ZpmtT7Z+6dzTgUMw9BjvTY46aliqr05DVQvGxyXOI4p0Wex8Ukr1xOf/DoirdM3372zjUjZfEMCJEACJEACHSFgna1AdHGdkasY+nrNiVYX8SlhfmN/R13xCSKi5FdsGVaY8R2Y7TrWK0so3pvGmJpP6NxHph2ucCWiQ2Qe+f1CsLTJn4tIOW4ZzRBubah5cbjbXfZf9iGbfZAWPhRNKUk4BHMg9zhhbfnJsf78bI/75LbgMLwWO5Ct4d2/2ZwXcdGu+JQybrPq+pDvEsbNzKHF8bWCRbBcdXfUu8Imxf6T5v02V1vk2t9dlzt2thmnhPtXUKO9Rp1rUi3lauhvWnvQy2Y9fyhn2UTuWUFd8iLBh7NlJp9norGyd21eM89XkVLT/hoGO4x8kmvALTypfnVPf7aC0r0CpvNqObHJrRsu4W5TfHLqS9MuYr6dc477cve+nBkvlU8q9ptj83e185vjtq2Tr7tGfAo6+csqZtU2j2LIqUbvXyDmfRDdFBSoXthJykzFfJpSpjEWuWGHBUSNI0WE2VxHeXFWh2z3KyFK5YD62ll3qorb2sDqUlEbvxZFdA6ocrDmPlpPPIImbJF/oaS0KYjmanRzCwUZnTfgXgHFpTJW19dMmHhw02u1Dn9c3FZ7y+Xcr5zX5mbUoM12Eik5n/Tx3tbMskWz/rsYTcZXXy+jNDeBIbkBXRtC8Xk0oin5Jus0MsWG1BHtjGPsBpVWrndD9etwWuatXW513CIl8A0JkAAJkAAJ7DMBM3/IDM7qXJw6h6aKzHheRT36rC6c12US5gr2dzTqgNqlI7IMwUaQRB5Y1somF6dNJD72VQGFxRWU76nlCy1EPllxIaPyQN4rJP4rPWsQ/p5Ku46yWvafyZj8lNK/JOfLLaPhcZa18JNjgzmfLahjfXIb+vK+FvFJ8nQam13F+kbMYO3cLGpHuufCPhhPOzbBe5fPKtoVn8yue3Hfxi21/dcHaJPCq9G8v0Geofb72pkz9IoYuf4Sq7B24F+TjfxNy+bWl8n3H3VfKnubNUWrNnVGBAtbZrJf5PkYnq8SLTv+rqm/Z0/xfR7/fVCyX78EfqiNr1QOvnslrFTWsPGdysfmXBP+eUGBLb7wtIuYb5dSTGo/1PGRNnmcg/LMeA2Mqr4lj/nOfnOCCjry4kDEJ7NrR1pCxS2UJ+OqfjRPj2LhXyD2Io2FpKljzROuZoLWrsSnyPDUsf6NmqzE80+5h6mdBQacUFjfCKNL69wzARMGLuta04xSlhImTM6kOFmK5ueVgi8UtVrHml7iGJnQSV02BDA6IZQvw7+aQ4MbsskdFa5j1ZySwm/DItNf1ddRVLkYZPtmP4Q07cwGN2N/HE1/ZKyiBcbGOK3cyE3IX3YZLXPPbCNaLN+RAAmQAAmQQAcJ2HlcotPtV2vnJElzBfs76s81tiqzOjJJ5YE0S5Giy0PMZ1lMPI4KRCIwuA6ZfBZxyGxeyYyXA9RveeL7+qp+Ml/wHpbJsVKfaYMVIPyoLXXw9hbU0i8j1jU4TiLBJCo7hRl20ydp/CH+K+MSsYO0/up5XHL+zyE3mkkSTSel3bDLyoJ0EgnjZuacoWOtc846ecDc5uklQHFl1xzSrTYJa9diu26HXqLXKnl4RADx227H1r3v+Ifo3Tldf9Ne14nL7uInJ3zi+9Zq5zyTMiVYWhc5y4xFkJvY81Uihya8aebvySmJflXS74RXv6S8CTfosiX6y/O886Re92872kX9W7WsL2FJuFugUiciKXSiX7bmyxl9IznwJlpeN707EPHJ3KxThBk/GZe9yQzcsyF9lt7W01mdKDK8KO2kxU18Fjk2i2DHOntB+z8Wpl3hDVsGKmr0ngiztYHK4iyWbHJHOUetp1e7lpk66lhbKqDoJ3zy1tFG6wGgn8wNYPap/wSlitIXGQRJMiWyy1fHW4l8SmEhy8rCyZvX76CjvsCVnphOXYx6t42EdjrF2Z06kgWz+lOze0BkNwPN6WaQYDMsq47VUgkrdo/krVoFpbmlMMGcPVCvuQ0msDYx6R0vObxKkv9wDLnbRXN+Gre0yCd/Rz1Vt1XKI7Zty439cPg3xv2wjRAkX5EACZAACZBAhwl0VnzSDy1VZHp+GtN/dx462V4ZRygqSAWbjaREPkV/q83cTCXgNSkRQlzV5QVM3yumRx3YBLVJiWMRJCzP2jmlRHF5O/UqZ0YnR29+nGwAE2x4Yuce4ZxP2r6LPkkRh/hvW+KTLGGUh52ai0pMrnamdh66i/gUmzdKQuUMGo2bsePQlxHRNes/nLfRf4k5WlXbutYmxf7j8/6tZ0VMzxVQ+j4qIHelCdpk08nJucUuHF+5RX9Tba7gb6il+79eRnFpFUlBeSGfBPFJNmxI8ItUbqOsG2jh+yphwYmvjK0m+3vuCb5/7L8PjvXqTz5ONgMIr5FolFFQWuRFW9rF5oreNTAQ5YKSqljK53DrG6tp7IEvp+/5n9nNsIJ6AGyuorS4gvVGm1y4x+/j6wMRn+BsfTirch9tqER9VaypbQg/U7mTnJxPkmVfLWO7F26TmPvXLbMT3JJOehREQuVGR5C7vYDy+oZ++lOtLOBWNoOsJFVUcFOEgx2JTyqqSrX5s1mUqzbhYLWChdsqwbgkGrS7hWRHUHpu2lXfWMPSpPrBCY0/fpHIebewUKmap1nVVRTzV73M9q0KQ0mWZaPCRotYte3feL6EidxN3FSTs0AoaqOO2gpGFPPbC6joMjew/mQBt3IjGInsHJjUHkmWOIaiCre3/1Q+qukRs043sj2wLsLh9GQdG+qp38Y6ynfVtpuOeGd3/Lg5U0bVJjMV+xhwd7eR4yZLlskG1pbM7oy3vKT1voCpmuOPo34/OIKRnDuO1kYitq7OtuNxe0EvOwhuGt4NVT3t0LvqZN0y99o2kseHn5IACZAACZDA3hPotPgk4kw8h5PqS/2JeUCm5i4reu5RRmlK5X0y4kD4sFP55SaKKvPZGGbvFVGxeUple+3MtSGYba8rWLk3hKtKXPhsAWv+s0QHotmZ2eQPHVFL/vQyimkMXTPtjcxj7W65OsXD/BLKFdlO3RPVguNuYuxrNYeWbbhNe
4KIgFTxKdwyfCd9crp3KF+2JT6Jv5DJIJtTdjOLiS+uYuDOkH5YHeRxEvFpcAhDf7+JsXve+Lp2lDBues7p+BZQKxnUVuzOtu9qu3mdO8gVDhJGqFttUkXkKT8jk81Zu66gvDht8yG5PmRCp7roI+F7M1/EyjPlv2zo/ETFsRyymSxuLa47rW3F3zSBCzczGdwMfJg6qs9KZklxgoDkVBD40u69Tn8f+EVLWNM++wbWH89q3hHxLOarREv33xlb7Zz4ZDaauInZx9Y3rFdRuXvL3tND/7sV8ak97UJ2ZLyJicVV43MGfr8jKO6BL4fNMiaszy3ax8Z62eoQ8R0K/TE4iPcHIz6pntbXUNIXl/lR1ap/Jov+kQKCXe+FyC/rKI1JgrAMrt4pYHXTV2flfRXVb+0NSP3Y67xKS84uIXstPhl1sXDnqnlyoevM4OoXXqLBX9axNKVuJmF/s38bQdFR533RwnS/jlWZuKSVvZvIJ/VwY30J0znzw6THQedAWodeJ74T8SmpzM8mUK5WvTJlgKN/zc0o5KTbpBIL5hew9EzExug5SozxOamJUiGyZTBQf1oIJnLG5tQOiyuxnQrr3xcx8jeHSbYfI26yU/uD37L4pG74svbYjmMsGaXt0tbzYtDGIJQy8YYe73PM7nZpGz5lvicBEiABEiCBzhDovPhkIsrV/CJcuh/2ZQtrX1uhSP9OZ5GbKqP6dEHP76IOWfRYd0lK/fsSxtw5lZ3brrbwBHqjUojOPVQ71PzjXgUb22FL9avqCqa/cOeeWfSPlaLzXXVg0nEjRay5wSF2ThM+cIzWtZs+RUs6XO/aE5/UWJSd+bayrxVUZWljsIwovA7W10sYs+KjmrMa/8dhmDBucfFJLcesYuWrW0YElTno38ZQWm+ghtpqutUmFcuFBN9rqYU+OQQP/GUSX+UfFmLOcIv+ZpIPluQLJ/ZcfOm4rxXzFX2/SJWX6KskVqQ/7LT4FPcN3Xt6m+KTanE72gXqWPt6BCYPtPFpfb/fkNmdL6fL2FxFTIfQWompodv+f3Dik5CQ9enBGnX5IuGvOrb5fdKcKOW2enxCdW1/9IuNfPJ3unML2pLtWNtsmPSnUdluPTt4bdZ+19FmyxrW1HA9ecMzd/qlzXfQxFCCvvqTOa/a4Djv83be+qJiq0y2mvQhaMM+2EZQF1+QAAmQAAmQwGEm0NZvqppzpMyaZE6Y8nVDhHJuWtnuyXJss3paPc4t23+9F2X4Zb6K73+pY6vJ/NPFoueNezH/F9tuxa7cBqjXMvatnCvH7odNil+1F3z8Pu/je/E3Uu8nbluEb7M+y3HNxsEtu8lraeceFtmkxj34eq9tJLiOJL9eoza25pcG+fqajWnDqlrQIRqdv0/fHbz4tE8dZTUkcFAEfPHpoNrBekmABEiABEiABEiABEiABEiABEjgIAhQfDoI6qzzlSJA8emVGm52lgRIgARIgARIgARIgARIgARIwCNA8ckDwrcksNcEqt8WkLZ98l7XxfJIgARIgARIgARIgARIgARIgARIoNsIUHzqtutpOXgAACAASURBVBFhe0iABEiABEiABEiABEiABEiABEiABEjgEBGg+HSIBpNdIQESIAESIAESIAESIAESIAESIAESIIFuI0DxqdtGhO0hARIgARIgARIgARIgARIgARIgARIggUNEgOLTIRpMdoUESIAESIAESIAESIAESIAESIAESIAEuo0AxaduGxG2hwRIgARIgARIgARIgARIgARIgARIgAQOEQGKT4doMNkVEiABEiABEiABEiABEiABEiABEiABEug2AhSfum1E2B4SIAESIAESIAESIAESIAESIAESIAESOEQEKD4dosFkV0iABEiABEiABEiABEiABEiABEiABEig2whQfOq2EWF7SIAESIAESIAESIAESIAESIAESIAESOAQEXjt119/Bf+RAW2ANkAboA3QBmgDtAHaAG2ANkAboA3QBmgDtAHaQCds4LUX6+vgPzKgDdAGaAO0AdoAbYA2QBugDdAGaAO0AdoAbYA2QBvohA1w2d0hCmNjV0iABEiABEiABEiABEiABEiABEiABEig2whQfOq2EWF7SIAESIAESIAESIAESIAESIAESIAESOAQEaD4dIgGk10hARIgARIgARIgARIgARIgARIgARIggW4jQPGp20aE7SEBEiABEiABEiABEiABEiABEiABEiCBQ0SA4tMhGkx2hQRIgARIgARIgARIgARIgARIgARIgAS6jQDFp24bEbaHBEiABEiABEiABEiABEiABEiABEiABA4RAYpPh2gw2RUSIAESIAESIAESIAESIAESIAESIAES6DYCFJ+6bUTYHhIgARIgARIgARIgARIgARIgARIgARI4RAQoPh2iwWRXSIAESIAESIAESIAESIAESIAESIAESKDbCFB86rYRYXtIgARIgARIgARIgARIgARIgAReWgK//vorFgpFjIxN7ck/VZYqk/+RwMtMgOLTyzx6bDsJkAAJkAAJkAAJkAAJkAAJkEBXESgUF/dEdHLFK1Um/yOBl5kAxaeXefTYdhIgARIgARIgARIgARIgARIgga4iIKLR/L0iVr59sqt/qgwpr6s6ycaQQJsEKD61CYyHkwAJkAAJkAAJkAAJkAAJkAAJkEAaARGLlPC02/9UGVLebsvi+SRwkAQoPh0kfdZNAiRAAiRAAiRAAiRAAiRAAiRwqAiIWETx6VANKzuzSwIUn3YJkKeTAAmQAAmQAAmQAAmQAAmQAAmQgBCg+CQk+JcEQgIUn0IWfEUCJEACJEACJEACJEACJEACJEACuyJA8WlX+HjyISVA8emQDiy7RQIkQAIkQAIkQAIkQAIkQAIksP8EKD7tP3PW2P0EKD51/xixhSRAAiRAAiRAAiRAAiRAAiRAAi8JAYpPL8lAsZn7SuDgxaftOtZ/qJp/P2/vbed/rtmya9hsVnT9p6Adm/W9bUbnStvGZtWyq740je4cDpZMAvVnGMzN40qxRhYvNYFfsSm/C7y3vdQjycaTQDcQqNdqqKl/O5wqBee3WEC7x3cDI7bhAAls1Y191mrYmYmG57dmou0d31F7rttrc8d9P8Bxa1I1xacmgPj1K0ngAMWnbazPzOHtvmG8dib8d2zgMdabCUWP7+Nte86x4Rfxgdv+ASPX83jdKfe1vnH03f0h8dipgfHosWeG0VI74qXt3yflBzjxUchNMXz9kxkMlH/dvza8yjWJaFrbKW/rXHfCsS7MmWvq+uPuHaGaEk1bEIXb7MH6+KTpe988Fto8dy8On7pursnThb0obQ/LeFrEMXU/3Ceb2DGH7RoeDM/EfheO/GkcffP+vf4FrmQU7zlM7SEqFkUCJHB4CNSXP8cHx3tw9OhR+68HJz+dxOpWi338sYQbf+h1zj+KnpMXkP8uRSJIOL739xcw+SxeX+matCnh77VSeMJaHu8H7U849uhRvD8WvT++GHs/0uaw/0dx9A95RI8Oq+Kr/SZQx/I/P8Bve5xx7TmJC1OrLTek9s0N
vP8b5/yjysbzeLKZXET8+F6cuph8TdQe+tfPUaTZc1Bb7Qnyn55Ej7bZfjiWHByiX9SW8fmZ39rjbPt/c6qtvkcL7L53FJ+6b0zYooMncGDi02ZhDm8qZ6hvHKeHH2Jkuoj3zhnH7c3rj5FyzwS2n1uHwxwbF59+wtT1Ue2Avp6Zw5XpRxgMnJlRnC785FCvY2FAjp3HwPwjjMwvok87NMN4c6Cc3g6nlH1/+fQB3lGiXV8evblFjMw/wkBu0vKcxOB/9r1Fr16Fu3bmH+O0sv9MEU/3mt5LID4ZcWIcV/a687UKBnLzOJ8kNO8154Tydiy6JJS1px/t2l7ba82OOGw/x8B5c1/X97br8+gbmMd7fwkfJLz9j++cezLFp/ZGhUeTwCtGYPkGfqcc4J6TOPvPSRQKk4GQ1PPHyeYCzI9zOKdFgV68fy2PuUIBk/88i5P6s9/hxrLHc7OE/hPKie7F+xeHMFmYxNDF99Gr23AOcz+6x7/A5B/VsadwbuAGbvj/ZhzxoVbCkP+9fX/hD7/VItMHX0blpOXrSnB7Fx/8b0LZd0pgbLA7Fgf3evnm7/T49Zw8i8+nCihMiZDUA39Mk1pZmzlnxJvfvI/+sTlt459/ZIWfEzfgm2h9sd9cE795HxfuTKIwNYQLVlzt+fNc1C6C6+cULumyjf2ba8q3Z9O6F4V+nFLXR09PY/GpvmyvlR6c+l9zbRWmPsdZff304MLdFHE3CUIXf0bxqYsHh007MAIHIz5tf4e+s8rJGMeVihPmtP0U5z9Rn+dxvpzEZBsrw+M4cmYY73w6rgWmmPi0OG9EmEwRK07RkGipT+5jRYr+YRHHlQDw4TwW3GNV+z5U7ZjEYEKwlJx+UH8X/mEEsxMzrpAGSNTHW7f22qM/qJ52cb27duYpPunr/5CZ6o5El/0w813ba3uN3AmHp8Pmnn7k0/t44HtG5aIR3M+Mom9RbtYUn9obFR5NAq8SgRomP1Tizru49I3ryK4i/0clzPTgUsH9PM5meeBdLQy8P+oIQeqw7z7HKSUofTgZcdaNkNCDD8aix6+Omiikd2+6UsAq8v+jnPT+mEAQb0naJ6sY+r0qIy4ElC6rvn+AyagmlVYQPz8IAj9O4qyyo3cvoeQ+cX+WxwdawLmExia6jBvvqnF+H3kvsu7J4Cltu2en3B/TZdxQ4k7PB97xq8j/wVwrrqBqBMy4ELR625TtR9vBRuj1/uEGCi8K6Fd9O5oc+aREMxWNd+p29FrBv4fMtdWKOHwQY9ZmnRSf2gTGw18JAgcjPi3O47+U6HP5ofMU2/DenJnWotJ//eO7+ADYiJ8jnz7AUxvd4YtPoTDjTyrqGLmsBCXlvNiixSGLRZ/8hMFLVhxryTl+joG/5PHWR9MY8I/f/g4fn1PfzWNKmlR/gZHcNI5pgWsYb3wyjY+nn0VZPLiLtz7K48SXT80ylD9JlMw2Fv5h6hpc8xBJf9paWrON9YcPcPr8KN7Q0VSjeOv8HAYeuj9YLzB4SdV5N7aUaSGnPs/j4wdOW+zSGenfm+fnMbW2DXPsNKTd/nsp4emX07bvdta0togTH+XxVu47bD4q4sQno1qAlHKx/QOmBibxlmLUN4pjVx9gRVhLoW0xf4H1u/PoPWvqUeNz3skhpNttv1P1qf6rtgX/eXW9/uE4juce4altk+nfqFnqKedfWnQioOJj0nu9iIU1cbqDmoBnj9Bnx+7In/I4MVyBiirUS1lbtgNnfLefY+T6JN60ttB7/QEe/ODW6xy79ljXrZa3BsvMvL7HbNuOpS7/zDDeOOvacpOylV2NzwXjcuRPo4i1z7EVQ8kps1bBlavjTt8exu0Ev+Lp/F0cP2fGRzE9nnuIFfdy0AWr41wbmcGVRyrqUt03HB7OUKW/bGW8/X7k9fX6+oeTdjnxdsRm3zznLcF17w1qya69htT5p8cr0XtP0L995LD9GKf1Euz0aDgR149cfWTbmyI++TZ41vQxWM5dL6PvE2V33kMH1e//PDT3mosPnIcUFQxct/eXM8N48/wcrhRbfCrRrC2adXz83zo/j5Fndklvu+1NNzR+QwKvDgFZqvb7IXjuLbB8A+8qx/hioUF+nRfWIb+QIADId664U0K/Egz+b0J99RdYLhRQePjCqa9knPNdLIGr372go0t+FxG11BBL+5Id/1fHCLq7p7I08tSdmIVChM+GEUBi40l2LN+5Is5iv7aXk77gA6C+toxCoYDltXDybJaFvo+872ss9mvhKCqmAlibw+djT6yNW/tOEZ+kvieRaEA1Xru/Lrpp1Ck+ddNosC3dQuBAxCd5wv1OPuGRjDhJSmCKUKpi8KIVhFS0VKL4JM7IOAb8myWAeL3PTKRV3zRGXOey9hAnlCN09i5cTSXSHO+NlO2LYXhghLbAYao9xmmbq+mNT2fQNzCDd5Roopwad7mh7d9bmXEt1Gkn/S8+k2gjNqeNcBdrQ/SwyLuneRNJppa5HFfLXJSTpZ3AUZwuyo+QcI3nVok72+GyRyXM9F6ewXHl6J6dw2lP0DPnxp3NGEuxifOTOC5ijRXuXjs7h4+vj0IJEUoEElEj4K162ybzYxeV+GJFJRGZnIiLhuLTz2Wc1lF9wzDjO48TdhmnFk2VHWpxLU182sZKftIKU2ZMgmVHfV6kYEWiQYyA+d5lIxpJVGDr+X1kfKdxWrNU9c6g12EcPjyTY8dxTPVLcxpFX7FFzk3EJ5PDJ6FsvMDAp+Y6ef2TabynlmNdzGsRUtnAws/WrMVWAuFN2juJ4xeH8fqHRiwN7OTiItaDK+InTA2YMlWOoRMD8zjxqREgVR0hgzACU/X/7b/M4MT5UbzeN4537FgHYlxQdtqLVsc72g8j2lkbOjOK07m50GZl3PpmMOJzuTSn7VOJgmIvsXsPDoBDgwcSaeSUgxXL+eRc68ZOwvvr6+cf2GjYbUckdIXVMHr07dv2MXLlAXr1/XBYj3Pf9ekgH9U7w8/Tm6a+cdvy4SROqHMT7vXr09P2ejc2F9yD+8btw4w22tu4RfyWBF4dAt9cSnaQNYESLumIk/iypBCQjSo6eikhZ00dhYsqquMo+uVh5r+HcPLoURiHvI4Xy5NmqdydSSwnTHXxYhIfqDZcLqD27wLy/1TL4z5HftEVqMLWxF+lRz0By0YI+588Vl8sY/KOWXo39P+eoNZqrqt4hfxkjwmY6LR348s3VT0N7dc2RKKELidkVaoXcMGLPFq9fVJHAuroJiWITg3p5Z5DU8t4kWAXtamz2sajkX91lC6riMB30b8oPkISmMbiU9IZ6rP6N5e0MNzzv42E4bSzu+9zik/dNyZs0cETOBDxKS5YuCDscqSPnOVxAFRElFpu97YsKUsUn+y5aQloRdCRMlS14lz02Yn/wAyOKWejbxJXHje6sbptVk/MH6A3lsNnGwufmSVy782pslTUknkfcVy2axj5q/p8FH0PrDNk26pyYl1pJYm4cnSU6NHXTs6nZxi4NI63PnEda0AEszAyLcHJs933x1JF3eiotsx9J6rEcdbVUkurKrY
tPrniy3YNgzqSbRhHLj7AU/EhRTg8M4Mp/dkOmH84g5Eg2mcbK0N2OVAgaCgFKTmB82bxLno/GkXvkOuYVm0umzzOBznAU5bd/WcRx7Xw6Y1J+b5JGH3+gRVLpMxRvDfvKKdiB8oW3fZ65hp9K+M7jCOXHzoJ/0Mh8c0gEjE8VomlQTRJO7YNWOc/tAXTnrSy1TKHRR2N9LYr0Ko67fV1YsZeq7FxCcs8JoKCqqz2GO9pkcZpg12ye+RTd8nuNtbnZ7RN/9dnNrpNrvWInUBHnOk8du1EPrU83mE/wnvHNpRwoe6L6rofCCLjfrJRnsN47669MITLmTR7GcXHj6xVHAAHJYIrEexY5LqJWmn8nTARUTy81iNjHdxfh3H8y6opRh4KRK4RuaZk2bc88PByBdaf4bwWGScxkJpfL2xL5KGCylmoRVSJwBVR1Str8S6OfTSJ05JkvaX2xgnxExJ4VQlIVElsaZAGIsuVGkUGicDkL9sDEOSCcsQnGw3y/mjJLusz4pRJ9N2Ls3ckIsSOyDObRDzIjRMe3/PfN7DsLsNKGETJ9ROPelIHW3EtoeyjJ85hzluilVA8P+o4AYlOS4gsUnVLdJ6beN5vkwhM/rI9NcWRXFBO5FEQyfSNXdanxSlrd785i6FYEv1VzOnE4b049edLWqi6oBOE9+DUzWUnis9vmHrfuvhUWzQi2A2bH60V+0+qsRs/o/jUjaPCNh00ge4Vn1wBSRzFs84yiV2ITxGn/IcKBv5qIyiUE2f/vf1Xb7nRi6c6sbdK7u3+exA80RLHxXFmJXdUkFPK5pLyhDVtBI/v460zwwiWG9r+Be8bWYrj0PgJ1Z8Wo+3VbS8+T1hm41RQscJKsBzRd/LCY33xaWHAMDRiW3gcIHm0Qj5ti0/eMk2JkAqEB12dtFXq2QPm4rgHPNLFJ7fH4ettTP3VcAkjYpLFp/UvzW5t0T6pkmTZqM1DtvYA7yhbdXOY2QpleVJo53U0tgNh5opjtrCfH+KEqkfZsP5IjvXzobXBuan45Jdt25LwR4TBINpPxioQFdLaKwJYuETOCMMJDPAU51W0omUgQkkQHRO0S+4BYZn4+TkWvHuGugYXKkYsa3m8gyif6TCaSdUr/U25NmJcEu49Yi+SK+4gOMi1HLRXmMoySrW0Vf4FS1RlbEV8sjboRnxJOWW7Q6pE1Mq9uW/OitTOAwQReCUnoMdWFSnLwwMxS+oJ/qa3ZbNS1r8hxgZEfBrF6buOiByUY1+00l7/HL4ngVeYQGPxSRz/RuKTEQBMcuVT6C+solar4cVyHhdO9qCnx+ygF0Q+2UgV9XnPH4ew/KO5x9eW8zinkyh7kSL/nsONgQs4d3EIpWf24cmPy8hnTALqd6+VGjj3krsnnutJD7lOUH4J5/58A3Pf1Uw59RcoXD9lkkDvYqnfK2xSe9x1scEU8UmWzTUSn5SJSsLy/+5H4VkNtdoLLI9dwEllhzoxfmjjJtLK2O4Ht5dRU2a3VcPy2DmThPzdfpSsKZrO1vFkqt/bSe8ojp44i8+/afB7pU9uXXySa1ULtSfO4sbUk0gutT0Gv6/FUXzaV9ys7CUhcCDikwgUoSPu0Np+jPeUsxs4SRJ94S4DQ8qyu7LNGyLOiFOuenl3RotL4mTpZRE6WiiP91TOJXXTrf+ElekZs/TMXWojkUhWnBKRyu2DOJK94/bpul1KEkRMiGAgS7rEmVJ/ZXmXOM2J4prXH/U2EJ6GEUZEyHHinIWimm63K6LgV6yvPMKV3DxOnFcOnizlkRxTqiwpJ841Kj7JcSL8SDvUXxFPwu/aFp+EjS1WHFZ3DMK22nr2grk4+C43+cxrk26aEhzG70Mt0TmmxtYutVHsw7Ymi09ybcjysMDhDpYU2n4V50zES1L9VsgMxScZlzQ7kO/j4ws8xxW9C6V8l3JsO5ybik9Sl2s/yoSqeDBfxMcDM+hVXGV5mYqYGbYqcGxcUtobtEHGREQAyUPliB0f2XxoVhCXMQrHMmznyq28vscE30l7vPuGtFfKajreadeglO/ZgVwbUk8gUnnH6ZaL2Ky/OxgOcu+MRT5J/1x+wXXoja0cG3wfjgsQj4o1IlsYbSptCO7fco0l3a+t7QV83arUa7keEtsSPTjY+fXMsF4+3Hv5Lq7MP8V6xAmAjZpt0N5osXxHAq80AXFodx75ZPCtzihHPoxKUonKT14uYNkmEQ/EJxv5dLTnAgp+1JJEsSTl5omNkkRlncNcin8vUS3JUU+xAp0PwiTsbmJp5wC+3DcCTcQnsZkm4hMg0UmOjfacxKW7yzZnmSM+XTPH9CTYYVKOKRG2fpfJB2Jq/VkB/f9tEvY33o2vdfEJdSWa1VB78QRzViBtaTfKfRurnVdE8Wnn7Hjm4SVwIOJTzDFy+YojZJ9Qbxbn9O51b1xaxMoPVazLv2kjJB27VdGfaeEocNBCgcMtWqIkJNeUtCMewQCs3DZOZPBd/aewbmnDD1UjWEklPz/Ce2rJlH1yHjg3khNAnKMPx3XOGrWNeOzflI2HbkV82n6OwUt2Gd+tpKTB29isOsyk3VXxalQSY3O+zt1zXuWguo+BqXmzxCtwnDwnT/qb4MBLzh5ZWhceKtE/4djsi/i0F8yljICHE3HiOfObi3eDnDBvnB3XebQ+Hn6E8zrflQgdikqy+CRi3tt/SbANbS/3MaVyHYsY6tWvecs1FHzX3A5iuXOCgZOxn8GINht57wlEwqgV2w7sJrQFU11K2erLpw9x3ObSUkJN7+V59N1exOCAt+ultCPoe3qZwtoIRXLcKN75axr7RZ2EOnpeAEq/kHtMID5t1xPvG+s/m+VwUlbT8Q7ubSncg/6a9si9LRBHYlycdkfs5WA4QHI+RXJwOW1UL+XBRHAdSlstE+lj8L17flx8go2GMg8HZImdE3Un19gn0/H7tL13n79vHzS4VanXDdviHwzgWRlXBqZxTB5CKLFNLaV0d4Nt1t6EYvkRCbyyBBrlzAmWKzXK+eSQk4ThTkJmsxOYs8uYiE8Jjn2wBKnFiCMToZISEYMmUU9Os5Neys57gWiWdBA/2xcCjXI+qWTyKhIoltQ7pWWSwLtQWMYLPVdz8n7Zc8yyu6NITGIuy0bH7IO82hzOpeVF22zwXdC+NsSn4Bz14nAJpBSfIoPLNySgCRyI+BQ4GgnLGfwlIOJESaRR2l9x9ozgM4z4siVZEhMuqxHHT86N2IQ4Hp5TFzkm9qaOkasqukTl70hYhiTO07liuJNSrAz7ga0/cB7947Z/wMhVIxy9NfC48TI6/1x5bx2+I39ZdHL3JDlO4uRJHiUpQASlUFSRSI7TBUnCJMfapUtJOZ8qcoz5Kw580Hdx5LyxENuIjp+01Qobe8Fc6nedWvks0iZxYPPoW/wp0qm4rSWLT9L3IFdPpBTnjdQvS4Scr2RJUBj55HyZ+NJj5h4j/PrmvGV3nggix7Vi2zsQn+S6Pj4e3WVMbCDdVqRvXnuDNojtii2rpO4ugPhrqTOIkAkOkeg+KTP4IvVFy+O9V+KTa8O2VWIvZo
nvwXCALCtT9wdXcHHIST65I8E1542t2GCwRNQ5WZbQRezTbjihjle5t5Tg4/4miSj31yBRm1Ngk5duW/xboTzIsAJkrKR6FQu37EYQbnvQpL2xgvgBCbzCBGTZUoPd7polNX6xXEDozLssbXTSu454Jc56ksCU8J2IBfHdvmqY+7OKUEkWn1qKevrxSWznMmn98k2VLNrJVSVf8O++E5DovPTd7npwqSAPixOat5W0i6I9zkZOueKVsh019knRgLHv5PpJsmc8wef/R9noWUzGdquTdjYSn+p48TDt2gJEJDsMAinFJ7EH/iWBkMDBiE/K0dBRDHl8/MhuJ63aFCRLloSvgOTHcPMs6deDJjfOW9cf6PwZQe4lmyz3tU/u4oFzzw6WNjg5csSJ/K+/+uLNT5jSCcCHEUQ+hcwavyqY5VC9l6d1kuLo+SJOqaUTTr9VDpHCXZy4Po/zRRtn3Uh82q5h6roRniLJbBu3LP6trSNYhmiPUM6oTmIcOKrhUpwTM46o4uy2JgLQ5tyMPbd5wnERqoIk8qr+wAYaLaUyDZXxk7rNp+KQSlTNHjAXoSfg4Qh0EedQ6vby8mw/NbsqJi27C5aXWvg2uiGSRF19tf0cA9dn0Jd7aBO5W0f0jBJTHe9WLcO0O661Lz4NI0wsrivFyrBxgsPdA6WPvpjTBudA+AmF4OjY+WVLfib/+DCx9u7FJ0Bs901JLG6HRNnkx5dn0DdUNsneJX+Qupc46CFiRWScpZCUvy2Pdwp3sc1AkDH1yLUR49LAXiRP24FwUJeU7Lx5dgYjQfJ0y+3ZQ7ynfzNUom6B7jMRG1QPH5z7FLYhyyGj92PZAXUUJy6b3xNhYGqV3VDdZO7qG7VD4RxODNzFoGwGUX+BhfkyVoJlMultkST5WqCvqoinGfRefejsuqg2wrB59yRHlcVgxjWtvfYg/iEBEohEUFz6xpkMYtUmBPdyMNVWUSpEc82s3jmlnfVT3tb0q2Mf6NxJUdFAIjZ+5+0CVkfpmsnj5B4fRLZc9nI7SSLyE5/jSWwcW4x6sjvvHf1DHqtuGZslXHr3KI72pC/pcw/n6w4T+HESZ3V00SWU3KWaz2xCcC8HU+3fJRS+DX5koJbcDf1eiUCnMPRvt61i497nUt+J/mh9myX067xkzvEimCYtI5Vd9lzx1a1ev24kPimByeRMi+6kp5yhhLbEyn55PhDxaaFQxMq33+3qnypDynt5CLClJBAncDDikxZbzHK61/ry6B0oYmB4Hr0qqe+ZYbw5UG4eyZMqzkiOqGEc+WgSfcOLuDIwaXI4qS3Ji84k5Oey2SHuzDBez8zg/PgjjIzfT9lePQ4v8ZPgCb7qiwggzpEi2Kh+5xa1cDZ4e9q0L9haO1xWFTiPThEPcnap3Jk8jict3bttnWTnnMSXPzw0O6v15XFa9X3+EXRbPhrVSx1fc8QWlQ9FC1Iqv45a8nR5HG/0jeNYbGv5kL9eyie5ec7O4bReeuYwEaHwzDDeujijcyS93TeKYxmz5DHoexMHu7H4ZBy5d/QOhjtkLvU7PCBLLM+Movf6PD6eM6HKMjZvXrqPQZ1oehF9mVG8edaMWdhWEY+G8falefSNfmdtPuT3emYOV6bVuKgy4tdGIBKKLU0XcTozjDc/HdfJ69sXn8bxzqdqLO5iYP4hrlwfD7aAD6NRfIffsaxWbRvhstYj59SSpgdY+FmVk1627Op25NwcBjTXhzh/yeRKU/eMdFtJLzMWjebkTzMMHmFk+j5O2PuSLNcFQtFL3WM+Hn+EweE5HOsbxTufejmfHDzJL1sd75R+iG22Kj5lxnGstmCOtAAABsFJREFUL48TtxcxYu1F8TviihwHwkHRCVno3wG9DFjlogvz0L39D7lOUuxFbFBdl/r+6tixm8NPBkN2LtTL3Jzk4/b74KFF3zhODz/U98iBAXtdnJ2zdguIkP6au2wwaMswjl0vYnB+ER9ftJtbBMKlREsO4+2/qmOUzRXxns6z5uzO12J75TD+JQESUMtZZVevXrx/cQj5sc9x4Q+9WlDq+fOck9T4BSb/aPLhuJEioSPcg5Mfubt9HUXPyc+x7EP26pucGoLZHUwlafYcfiUc/I9xwHv/cAFDU5MYunzW5pfyBSxTUW3qrG5781xPdRT+1/bz5FncGJtE/p8XbOLoHnwwFpGk/F7w/T4SECHz6G/ex4U7+cg4nZtxhKYXk/hA7073Ltx8XfXFfpMsvOckzl6+oZPYf3Dc5GQ6+c+YhSJa3yQm71yAOf4ofuclufePLRTmkB9obKMhusbiU3ht9uC3Zy7onfRUAv73f2OuQ78tYbkv16tCcTEQjEQ42u1fVSb/I4GXmcCBiU/q6fH63XkcU6KAJJNVzluuHF0ClkY3VXxSD6Z/wJQ4CLbsI38ax8cSVeSWWatg4LoSUrx2XH+AB8593z2l2WtZIiS5n2LHlx9qkSDotxK/PpnBlUfO0/oG/ROn2T0/8toVSWKVRz/YfHA3MgZHPprGYCVpSdhPWMhZp0s7aoantCUUVRT/Gh6Mz+P4OZO0+djVIh78sA1zrCM+qQiCL63wpsvM48T4Mx1to/qTLiiYPkh0R6TuQMBw6wGwG+bi4HtcI+zE6dR5uKK7Jx7LfYcHwyY3UaStlcVA2JCd1HTPtmtYuC2CqbVLdW0MPPaujV+xMj4T5JhSzN5SudEkasITJKIj775zhI21h3jPii2qPGUPEbsM+Majk3SJrXBWB0Y4hdvOp+ee8uxP+np3Tt8/0m3F6Zvb5SD6ylsiV3+GQRHdgvtSHieGvZxq9WcYuJo34pzY7pfPA9uNjLNXb+xtS+Od0g+xTW+s5dpI4rI+M2PFeGNbb10qxu91B8FBg/kVT+fn8Y6TpF/Z4eufTOPj+WBrUYswhUn5AU584tzPzwzjjU/vYuEHiZhyRyAUf4KNIdyv1e9U8X7wYES1Rf1749N5TDnRWRK1FY0cVPedhLZcvB/lXavgimtLYk++zel2NWtvpPF8QwKvPIH6d2Z3Or2Tlnbee3Hq4lw0Igh1LF9X0Uk9iDj8it6PJdzQ28sbp/joUXV+Hk/cSBWX8rM59FuBy9TZg5Of5vEkaT65+QT5i6fQq9tlyu/9/QXkI9EtUrhd6qeillKXOsmx6m8NpYEP8FsnWXrP8Q9wo+DfR91z+Hr/CdTxRO9OJ/Z1FEd/cwoXZjyBsL5sopMSxr/2zY1AQNI2p84fe5K6W+Lq//N2sOs5qY9PMtEXBa/so0eRbqMuvSbikzr0RcG7tmzfx6IRiG6pL9vrX3/9FfP3woil3QpPqixVJv8jgZeZwAGKT4LtV2zaRNgmabh8vgd/JbfGD46ok1qstKOVY1MLae+Ln2smGXHtoG8kbfRdJVAOEpa3011xmjxRSBeh6t8n7nvOXCXzrmHT92ulnrS8Li66mpe4PvhOxiWh/OAY9cIet2M78p14m6B8R+NsGyb9b9YmdY02O8btq1zT7Zzjnt/q6yBReBO7tO3Zm3tXq+PdaicaHCf9a2afclyz63NPOdh2iw01a2NaN
2t2s4Wdnu+X26y8n52o2nbPVceLbTdj7ZfN9yRAAk0J1NVuWrUa6lvph9YbXMLYqjc9P1Ky3cGrUX3h8absWpqgFR64o1em7406t6NiedKeErA2UGs0TvWG9otNZeONzo82uC27kB3pWi8+WlnDd630vWEB/JIESOAlItAF4tNLRItNbU7g2SN8/Jc83r7u5dGSJShJyYCbl8ojOkrAF586WhkLJwESIAESIAESIAESIAESIAESeMUIUHx6xQa849118sW8/uE4jl93c7aM4sRcytbkHW8YK0gnQPEpnQ2/IQESIAESIAESIAESIAESIAES2C0Bik+7Jcjz4wS2f8DC8Bx6VbJx+6/38n0MlpNWlMdP5yf7TaCKqdvz6BtYxMp+V836SIAESIAESIAESIAESIAESIAEDj0Bik+HfojZQRIgARIgARIgARIgARIgARIgARIgARI4OAIUnw6OPWsmARIgARIgARIgARIgARIgARIgARIggUNPgOLToR9idpAESIAESIAESIAESIAESIAESIAESIAEDo4AxaeDY8+aSYAESIAESIAESIAESIAESIAESIAESODQE/j/9Gfgg9uXYHgAAAAASUVORK5CYII=) # + id="hptai8UWR6EX"
111,710
/Software_Defect_JM1_Data/soft_def_prediction(SVM).ipynb
961e103614a928fa56b457eb04ca6ba80628ed94
[ "CC-BY-4.0" ]
permissive
lifelong-student/SDP7_ML_BASED
https://github.com/lifelong-student/SDP7_ML_BASED
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
1,783,357
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/hamidriasat/Natural-Language-Processing/blob/master/Incident%20Severity%20Scoring/Incident_Project_XLnet_with_summerizer_Finetune.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + id="Qm_xTUOjrwAh" # To get additional memory/ram # a = [] # while(1): # a.append("1") # + id="ngusBU9qQQNc" outputId="6acdd138-ad7a-4e1c-e137-30500f84995f" colab={"base_uri": "https://localhost:8080/", "height": 615} # !pip install pytorch-transformers # + id="iW8eS2oPQVLz" outputId="92f5c1f7-cb66-4642-b84e-829732772f58" colab={"base_uri": "https://localhost:8080/", "height": 80} import torch from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler from keras.preprocessing.sequence import pad_sequences from sklearn.model_selection import train_test_split from pytorch_transformers import XLNetModel, XLNetTokenizer, XLNetForSequenceClassification from pytorch_transformers import AdamW from tqdm import tqdm, trange import pandas as pd import io import numpy as np import matplotlib.pyplot as plt % matplotlib inline from collections import Counter # + id="MrSvjwDaQmdY" outputId="1a1d85a7-a677-4ddf-944d-20d76b81ba31" colab={"base_uri": "https://localhost:8080/", "height": 34} device = torch.device("cuda" if torch.cuda.is_available() else "cpu") n_gpu = torch.cuda.device_count() torch.cuda.get_device_name(0) # + id="gme_x2qjSonK" # !gdown --id 1H5YLUM-zsD7BbjMDO9iIJEjX5CQ-ols_ # !unzip OverAllDataset # !gdown --id 1ynEhV2TlJ_Ga4rQ0LjbU_WZdVlPB48Kv # !unzip sentences # + id="OQYbgtDdSoo-" outputId="b1f433ed-cd89-46c3-d34e-edba616007bb" colab={"base_uri": "https://localhost:8080/", "height": 51} def readLabels(): full_labels= [] with open( "OverAllDataset/Labels.txt" , encoding="cp1252") as content_file: for line in content_file: full_labels.append( int(line)) return full_labels def readData( ): full_date= [] for i in range(1, 1147): with open( "OverAllDataset/"+ str(i) + ".txt" , encoding="cp1252") as content_file: fileData= content_file.read() full_date.append(fileData ) return full_date data_X= readData() data_Y= readLabels() print(len(data_X)) print(len(data_Y)) # + id="eShw2OwbCWUQ" outputId="c13d279b-4f21-4711-aa6c-d6ae26984781" colab={"base_uri": "https://localhost:8080/", "height": 119} for k in range(999, 1005): print(data_Y[k]) # + id="UYL2tQQ6cgCW" # # !pip install bert-extractive-summarizer # + id="wV0D2rNWcgKj" outputId="0d409fcd-afb9-453d-cfd3-83d0fccf0b1f" colab={"base_uri": "https://localhost:8080/", "height": 164, "referenced_widgets": ["223e0097c2f14abb841f9f4794de67a5", "c03ba59230854849946dd7fdd56fc5f0", "2386e4a7f5984b5fa98df4c25b6a35b7", "f3a8cbc6d9e9411e97e11b25f04a5169", "731d158dac664c99a9c4a068f2595c3f", "1c01d10622ab4ba8a15d0979df636e49", "5392c5de45a24456ad36d112d4a235f4", "8af4e01588334ed1a50283affbb12aed", "1586448ec88446f5a6e4c92401fa869e", "ec6d783ee0e84be0ba48f2806dfa2cfa", "6c1085e702924bdc9c4bda41a4be2150", "0d35556960ac43559bcc2cde94ea7e99", "6cd3ee095f9f4596b9d09022c928b23b", "fa0e15e19a76432b9468dc9657b63425", "bc4aa2c1b9c6480ab48773065b021022", "6d8a5537b34b49faa8df22065e8b53cc", "c9ec823925fd4eb68acd37bfbb1244eb", "3dc584add55048ad84758137c68ffb7a", 
"e10b39be48b94bf78b91c4900f4e8d10", "8af0b73ea3ab44f9a43b5bd5c25978b7", "46d3905ff5a440e3947568f34ed73949", "9073c997125c4680ad8fb7a667ff818d", "dd5c8996510d4bfa97cfec441ffa61f4", "3017b447636542cea404442d5bb541e6"]} # from summarizer import Summarizer # model = Summarizer() # + id="LIeAztQccvxr" # sentences = [] # i=0 # for doc_text in data_X: # print(i) # i+=1 # sentences.append(model(doc_text , max_length= 300)) # + id="HBC8QKHMzhKd" def saveList(myList,filename): # the filename should mention the extension 'npy' np.save(filename,myList) print("Saved successfully!") def loadList(filename): # the filename should mention the extension 'npy' tempNumpyArray=np.load(filename) return tempNumpyArray.tolist() # + id="8k4erumYzhOx" # saveList(sentences,'sentences.npy') # + id="JLBHtIswztS2" sentences=loadList('sentences.npy') # + id="f7WRZ7TOdqvE" outputId="585480f3-1c0f-4ccf-b547-669a8b58ba3a" colab={"base_uri": "https://localhost:8080/", "height": 139} print(data_X[5]) print("=============================") print(sentences[5]) # + id="B_auhDwuZFVX" labels = data_Y # + id="dGy1oYorFVaW" outputId="3519619a-67ca-42ca-dbfc-b0607ab12e71" colab={"base_uri": "https://localhost:8080/", "height": 34} # get number of occurrences for each element Counter( data_Y ) # + id="6Imy9ozVZJEp" sentences = [sentence + " [SEP] [CLS]" for sentence in sentences] # + id="j-6M9xR5SkDB" outputId="b5ad595b-892e-4b24-f327-e9cc2d2879d7" colab={"base_uri": "https://localhost:8080/", "height": 34} tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased', do_lower_case=True) # tokenizer = XLNetTokenizer.from_pretrained('xlnet-large-cased', do_lower_case=True) # + id="wsWD-orXZJHJ" outputId="472842e9-a4a2-4085-aa47-d78f4fb117e0" colab={"base_uri": "https://localhost:8080/", "height": 71} tokenized_texts = [tokenizer.tokenize(sent) for sent in sentences] print ("Tokenize the first sentence:") print (tokenized_texts[0]) # + id="CqTqlCjNZJWb" # Set the maximum sequence length. The longest sequence in our training set is 47, but we'll leave room on the end anyway. MAX_LEN = 300 # Use the XLNet tokenizer to convert the tokens to their index numbers in the XLNet vocabulary input_ids = [tokenizer.convert_tokens_to_ids(x) for x in tokenized_texts] # + id="pM_XW8atdnAq" # Pad our input tokens input_ids = pad_sequences(input_ids, maxlen=MAX_LEN, dtype="long", truncating="post", padding="post") # + id="vNihGKYUdv_A" # Create attention masks attention_masks = [] # Create a mask of 1s for each token followed by 0s for padding for seq in input_ids: seq_mask = [float(i>0) for i in seq] attention_masks.append(seq_mask) # + id="Tb3oxT0Kd7KO" # Use train_test_split to split our data into train and validation sets for training train_inputs, validation_inputs, train_labels, validation_labels = train_test_split(input_ids, labels, random_state=2018, test_size=0.2) train_masks, validation_masks, _, _ = train_test_split(attention_masks, input_ids, random_state=2018, test_size=0.2) # + id="AOO594AEeT0H" # Convert all of our data into torch tensors, the required datatype for our model train_inputs = torch.tensor(train_inputs) validation_inputs = torch.tensor(validation_inputs) train_labels = torch.tensor(train_labels) validation_labels = torch.tensor(validation_labels) train_masks = torch.tensor(train_masks) validation_masks = torch.tensor(validation_masks) # + id="cP4cr4GCebyV" # Select a batch size for training. For fine-tuning with XLNet, # the authors recommend a batch size of 32, 48, or 128. # We will use 32 here to avoid memory issues. 
batch_size = 16 # Create an iterator of our data with torch DataLoader. This helps save on memory during training because, unlike a for loop, # with an iterator the entire dataset does not need to be loaded into memory train_data = TensorDataset(train_inputs, train_masks, train_labels) train_sampler = RandomSampler(train_data) train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size=batch_size) validation_data = TensorDataset(validation_inputs, validation_masks, validation_labels) validation_sampler = SequentialSampler(validation_data) validation_dataloader = DataLoader(validation_data, sampler=validation_sampler, batch_size=batch_size) # + id="bEecji1QnHkX" outputId="e9188ec3-49af-4b2f-ed4d-f563d169ef5d" colab={"base_uri": "https://localhost:8080/", "height": 1000} # Load XLNEtForSequenceClassification, the pretrained XLNet model with a single linear classification layer on top. model = XLNetForSequenceClassification.from_pretrained("xlnet-base-cased", num_labels=6) # model = XLNetForSequenceClassification.from_pretrained("xlnet-large-cased", num_labels=6) model.cuda() # + id="DYJU9aWpnHoN" param_optimizer = list(model.named_parameters()) no_decay = ['bias', 'gamma', 'beta'] optimizer_grouped_parameters = [ {'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)], 'weight_decay_rate': 0.01}, {'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], 'weight_decay_rate': 0.0} ] # + id="3zsMbo2TnZTR" # This variable contains all of the hyperparemeter information our training loop needs optimizer = AdamW(optimizer_grouped_parameters, lr=2e-5) # + id="0z1YVIV_nZXE" # Function to calculate the accuracy of our predictions vs labels def flat_accuracy(preds, labels): pred_flat = np.argmax(preds, axis=1).flatten() labels_flat = labels.flatten() return np.sum(pred_flat == labels_flat) / len(labels_flat) # + id="jRRT4i1OoKBy" outputId="6352dbb3-6d32-451c-b0db-6367343e1075" colab={"base_uri": "https://localhost:8080/", "height": 170} # Store our loss and accuracy for plotting train_loss_set = [] # Number of training epochs (authors recommend between 2 and 4) epochs = 4 # trange is a tqdm wrapper around the normal python range for _ in trange(epochs, desc="Epoch"): # Training # Set our model to training mode (as opposed to evaluation mode) model.train() # Tracking variables tr_loss = 0 nb_tr_examples, nb_tr_steps = 0, 0 # Train the data for one epoch for step, batch in enumerate(train_dataloader): # Add batch to GPU batch = tuple(t.to(device) for t in batch) # Unpack the inputs from our dataloader b_input_ids, b_input_mask, b_labels = batch # Clear out the gradients (by default they accumulate) optimizer.zero_grad() # Forward pass outputs = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels) loss = outputs[0] logits = outputs[1] train_loss_set.append(loss.item()) # Backward pass loss.backward() # Update parameters and take a step using the computed gradient optimizer.step() # Update tracking variables tr_loss += loss.item() nb_tr_examples += b_input_ids.size(0) nb_tr_steps += 1 print("Train loss: {}".format(tr_loss/nb_tr_steps)) # Validation # Put model in evaluation mode to evaluate loss on the validation set model.eval() # Tracking variables eval_loss, eval_accuracy = 0, 0 nb_eval_steps, nb_eval_examples = 0, 0 # Evaluate data for one epoch for batch in validation_dataloader: # Add batch to GPU batch = tuple(t.to(device) for t in batch) # Unpack the inputs from our dataloader b_input_ids, b_input_mask, 
b_labels = batch # Telling the model not to compute or store gradients, saving memory and speeding up validation with torch.no_grad(): # Forward pass, calculate logit predictions output = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask) logits = output[0] # Move logits and labels to CPU logits = logits.detach().cpu().numpy() label_ids = b_labels.to('cpu').numpy() tmp_eval_accuracy = flat_accuracy(logits, label_ids) eval_accuracy += tmp_eval_accuracy nb_eval_steps += 1 print("Validation Accuracy: {}".format(eval_accuracy/nb_eval_steps)) # + id="iGyvEDMqoKFW" outputId="3df0c066-2c17-4ae1-b51f-9f41819e8b7b" colab={"base_uri": "https://localhost:8080/", "height": 513} # Let’s take a look at our training loss over all batches: plt.figure(figsize=(15,8)) plt.title("Training loss") plt.xlabel("Batch") plt.ylabel("Loss") plt.plot(train_loss_set) plt.show() # + id="lp0AkejWoKJd" # + id="DV6cmEOroKQY" # + id="RORDIpC6oKhL" # + id="pieOoCxkoJ-4" # + id="Y8C2-83voJ8M" 3; num_query = 100; for g in (ranked_users[0:num_user]): print g print '********************' rows_userg = [r for r in rows if r['stats.type']=='q' and r['sessionId']==g[0] and r.has_key('stats.status') ] print_count = 0 prev_nothing = False for r in rows_userg[-num_query:]: if prev_nothing or r['stats.status']=='Nothing': print_count = print_count + 1 if print_count>100: break print r['stats.status'] + ':\t' + r['q'].replace('(:q "','').replace('")','') prev_nothing = True if r['stats.status']=='Nothing' else False query_reformulation_by_user()
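# +
# Added sketch (not part of the original notebook): the training loop above only reports a
# flat accuracy, which can hide poor performance on rare severity classes. Assuming `model`,
# `validation_dataloader` and `device` are still defined as above, a per-class report can be
# collected like this; the class indices follow whatever encoding was used in Labels.txt.
from sklearn.metrics import classification_report, confusion_matrix

model.eval()
val_preds, val_labels = [], []
for batch in validation_dataloader:
    # Move the batch to the GPU and unpack it, mirroring the validation loop above
    b_input_ids, b_input_mask, b_labels = tuple(t.to(device) for t in batch)
    with torch.no_grad():
        # Without labels, the first element of the output tuple holds the logits
        logits = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask)[0]
    val_preds.extend(np.argmax(logits.detach().cpu().numpy(), axis=1).tolist())
    val_labels.extend(b_labels.cpu().numpy().tolist())

print(classification_report(val_labels, val_preds))
print(confusion_matrix(val_labels, val_preds))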
12,927
/IMDB-Moive-Beauty Score-Skin Brightness/Data transformation.ipynb
9790896719026379eae35d7d641f7ebe1da355fe
[]
no_license
Samxhuang52/Projects
https://github.com/Samxhuang52/Projects
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
32,259
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- import pandas as pd import numpy as np from pandas import DataFrame, Series df=pd.read_csv('MergedWithBrightness5.26.csv') df.head() df.columns df['actor1SB'] df1 = pd.DataFrame() df1['actor1'] =df['actor1'] df1['actor2'] =df['actor2'] df1['actor3'] =df['actor3'] df1['actor1SB'] = df['actor1SB'] df1['actor2SB'] = df['actor2SB'] df1['actor3SB'] = df['actor3SB'] df2 = df1.drop_duplicates() df2.head() actor1=df2['actor1SB'] actor2=df2['actor2SB'] actor3=df2['actor3SB'] arr1 = actor1.values arr2 = actor2.values arr3 = actor3.values actor_all = pd.concat([df2['actor1SB'],df2['actor2SB'],df2['actor3SB']]) actor3 qcut = pd.qcut(actor_all,3,labels=['low','meidan','high']) qcut dfmap = pd.DataFrame({'SB':actor_all, 'range':qcut}) dfmap.to_excel('dfmap1.xlsx') a1 = df['actor1SB'] dict1 = Series(dfmap.range.values,index=dfmap.SB).to_dict() dict1 df['actor1range'] = df['actor1SB'] df.replace({'actor1range':dict1}) df['actor1range'] = wb.DataReader('CSCO', data_source='yahoo', start='2000-1-1') CSCO.head(5) CSCO['Adj Close'].plot(figsize=(8,5)) CSCO['log_return'] = np.log(CSCO['Adj Close'] / CSCO['Adj Close'].shift(1)) CSCO['log_return'].plot(figsize=(8,5)) avg_returns_d = CSCO['log_return'].mean() avg_returns_d avg_returns_a = avg_returns_d*250 avg_returns_a # Print the result in a presentable form. print((str(round(avg_returns_a, 4) * 100) + ' %')) onths']*100/demographics['Est: Total'] demographics['latino_rate'] = demographics['Estimate!!Total!!Hispanic or Latino']*100/demographics['Est: Total'] demographics['BL_rate'] = (demographics['Estimate!!Total!!Hispanic or Latino']+ demographics['Estimate!!Total!!Not Hispanic or Latino!!Black or African American alone'])*100/demographics['Est: Total'] demographics['BL_poor_rate'] = (demographics['Est: Poverty Status Past 12 Months (Black)'] + demographics['Est: Poverty Status Past 12 Months (Hispanic or Latino)'])*100/demographics['Est: Total'] demographics['Black_poor_rate'] = (demographics['Est: Poverty Status Past 12 Months (Black)'])*100/demographics['Est: Total'] demographics['Latino_poor_rate'] = (demographics['Est: Poverty Status Past 12 Months (Hispanic or Latino)'])*100/demographics['Est: Total'] demographics['White_poor_rate'] = (demographics['Est: Poverty Status Past 12 Months (White)'])*100/demographics['Est: Total'] # - demographics = demographics[~((demographics.index == 99)|(demographics.index==23))] arrests = arrests.drop(arrests.groupby('objectid').get_group(99).index) arrests = arrests.drop(arrests.groupby('objectid').get_group(23).index) #these two stations weren't near any people so we exclude them # + fig, ax = plt.subplots(1,1, figsize=(15,10)) ax.scatter(demographics['poverty_rate'],demographics['normed_arrests']) ax.set_ylabel('Arrest Rate per 100k Entries') ax.set_xlabel('Percent of surrounding area in poverty') m, b, r, p, err = stats.linregress(demographics['poverty_rate'], demographics['normed_arrests']) ax.plot(demographics['poverty_rate'], demographics['poverty_rate']*m + b, color='r', label='Lin fit R-square: {0}\n p-value: {1}'.format(r**2,p)) m, b, r, p, err = stats.linregress(demographics['poverty_rate'], np.log(np.clip(demographics['normed_arrests'], 10e-20, 10e20))) ax.plot(demographics['poverty_rate'].sort_values(), np.exp(demographics['poverty_rate'].sort_values()*m +b), color='g', label='Exp fit 
R-square: {0}\n p-value: {1}'.format(r**2,p)) m, b, r, p, err = stats.linregress(demographics['poverty_rate']**2, demographics['normed_arrests']) ax.plot(demographics['poverty_rate'].sort_values(), (demographics['poverty_rate'].sort_values()**2)*m +b, color='orange', label='Parabolic fit R-square: {0}\n p-value: {1}'.format(r**2,p)) ax.legend() fig.show() # + fig, ax = plt.subplots(1,1, figsize=(15,10)) ax.scatter(demographics['black_rate'],demographics['normed_arrests']) ax.set_ylabel('Arrest Rate per 100k Entries') ax.set_xlabel('Percent of surrounding area Black') m, b, r, p, err = stats.linregress(demographics['black_rate'], demographics['normed_arrests'], ) ax.plot(demographics['black_rate'], demographics['black_rate']*m + b, color='r', label='Lin fit R-square: {0}\n p-value: {1}'.format(r**2,p)) m, b, r, p, err = stats.linregress(demographics['black_rate'], np.log(np.clip(demographics['normed_arrests'], 10e-20, 10e20))) ax.plot(demographics['black_rate'].sort_values(), np.exp(demographics['black_rate'].sort_values()*m +b), color='g', label='Exp fit R-square: {0}\n p-value: {1}'.format(r**2,p)) m, b, r, p, err = stats.linregress(demographics['black_rate']**2, demographics['normed_arrests']) ax.plot(demographics['black_rate'].sort_values(), (demographics['black_rate'].sort_values()**2)*m +b, color='orange', label='Parabolic fit R-square: {0}\n p-value: {1}'.format(r**2,p)) ax.legend() fig.show() # + fig, ax = plt.subplots(1,1, figsize=(15,10)) ax.scatter(demographics['white_rate'],demographics['normed_arrests']) ax.set_ylabel('Arrest Rate per 100k Entries') ax.set_xlabel('Percent of surrounding area White') m, b, r, p, err = stats.linregress(demographics['white_rate'], demographics['normed_arrests'], ) ax.plot(demographics['white_rate'], demographics['white_rate']*m + b, color='r', label='Linear fit\n R-square: {0}\n p-value: {1}'.format(r**2,p)) m, b, r, p, err = stats.linregress(demographics['white_rate'], np.log(np.clip(demographics['normed_arrests'], 10e-20, 10e20))) ax.plot(demographics['white_rate'].sort_values(), np.exp(demographics['white_rate'].sort_values()*m +b), color='g', label='Exponential fit\n R-square: {0}\n p-value: {1}'.format(r**2,p)) m, b, r, p, err = stats.linregress(demographics['white_rate']**2, demographics['normed_arrests']) ax.plot(demographics['white_rate'].sort_values(), (demographics['white_rate'].sort_values()**2)*m +b, color='orange', label='Parabolic fit\n R-square: {0}\n p-value: {1}'.format(r**2,p)) ax.legend() fig.show() # + fig, ax = plt.subplots(1,1, figsize=(15,10)) ax.scatter(demographics['latino_rate'],demographics['normed_arrests']) ax.set_ylabel('Arrest Rate per 100k Entries') ax.set_xlabel('Percent of surrounding area Latino') m, b, r, p, err = stats.linregress(demographics['latino_rate'], demographics['normed_arrests'], ) ax.plot(demographics['latino_rate'], demographics['latino_rate']*m + b, color='r', label='Linear fit\n R-square: {0}\n p-value: {1}'.format(r**2,p)) m, b, r, p, err = stats.linregress(demographics['latino_rate'], np.log(np.clip(demographics['normed_arrests'], 10e-20, 10e20))) ax.plot(demographics['latino_rate'].sort_values(), np.exp(demographics['latino_rate'].sort_values()*m +b), color='g', label='Exponential fit\n R-square: {0}\n p-value: {1}'.format(r**2,p)) m, b, r, p, err = stats.linregress(demographics['latino_rate']**2, demographics['normed_arrests']) ax.plot(demographics['latino_rate'].sort_values(), (demographics['latino_rate'].sort_values()**2)*m +b, color='orange', label='Parabolic fit\n R-square: {0}\n p-value: 
{1}'.format(r**2,p)) ax.legend() fig.show() # + race_arrest_counts = arrests.groupby(['objectid', 'PERP_RACE']).agg({'ARREST_KEY':'count'}) race_arrest_pcts = race_arrest_counts.groupby(level=0).apply(lambda x:100 * x / float(x.sum())) blck_arrest_pcts = {} for i in demographics.index.unique(): try: blck_arrest_pcts[i] = race_arrest_pcts.loc[(i, 'BLACK')] except: blck_arrest_pcts[i] = 0 blck_arrest_pcts = pd.DataFrame(blck_arrest_pcts).T.rename({'ARREST_KEY':'Pct Black'}, axis=1).fillna(0.0) his_pcts = {} for i in demographics.index.unique(): try: his_pcts[i] = race_arrest_pcts.loc[(i, 'WHITE HISPANIC')] + race_arrest_pcts.loc[(i, 'BLACK HISPANIC')] except: his_pcts[i] = 0 his_pcts = pd.DataFrame(his_pcts).T.rename({'ARREST_KEY':'Pct Hispanic'}, axis=1).fillna(0.0) aapi_pcts = {} for i in demographics.index.unique(): try: aapi_pcts[i] = race_arrest_pcts.loc[(i, 'ASIAN / PACIFIC ISLANDER')] except: aapi_pcts[i] = 0 aapi_pcts = pd.DataFrame(aapi_pcts).T.rename({'ARREST_KEY':'Pct AAPI'}, axis=1).fillna(0.0) wht_pcts = {} for i in demographics.index.unique(): try: wht_pcts[i] = race_arrest_pcts.loc[(i, 'WHITE')] except: wht_pcts[i] = 0 wht_pcts = pd.DataFrame(wht_pcts).T.rename({'ARREST_KEY':'Pct White'}, axis=1).fillna(0.0) # + colors = ["#dd6e42","#e8dab2","#4f6d7a","#c0d6df", "#AAAE7F"] fig, ax = plt.subplots(1,1) blck_arrest_pcts.merge(demographics['black_rate'], left_index=True, right_index=True).plot(kind='scatter', x='black_rate', y='Pct Black', ax=ax, color = colors[2]) ax.set_ylabel('Percent of arrests Black') ax.set_xlabel('Percent of surrounding area Black') ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) line = mlines.Line2D([0, 1], [0, 1], color='red') transform = ax.transAxes line.set_transform(transform) ax.add_line(line) fig.savefig('Black arrests vs neighboorhood.png', bboxinches='tight') fig.show() # - fig, ax = plt.subplots(1,1) his_pcts.merge(demographics['latino_rate'], left_index=True, right_index=True).plot(kind='scatter',x='latino_rate', y='Pct Hispanic', ax=ax, color = colors[2]) ax.set_ylabel('Percent of Arrests Latino') ax.set_xlabel('Percent of surrounding area Latino') ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) line = mlines.Line2D([0, 1], [0, 1], color='red') transform = ax.transAxes line.set_transform(transform) ax.add_line(line) fig.savefig('Latino arrests vs neighboorhood.png', bboxinches='tight') fig.show() fig, ax = plt.subplots(1,1) wht_pcts.merge(demographics['white_rate'], left_index=True, right_index=True).plot(kind='scatter',x='white_rate', y='Pct White', ax=ax, color = colors[2]) ax.set_ylabel('Percent of Arrests White') ax.set_xlabel('Percent of surrounding area White') ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) line = mlines.Line2D([0, 1], [0, 1], color='red') transform = ax.transAxes line.set_transform(transform) ax.add_line(line) fig.savefig('White arrests vs neighboorhood.png', bboxinches='tight') fig.show() fig, ax = plt.subplots(1,1) aapi_pcts.merge(demographics['asian_rate'], left_index=True, right_index=True).plot(kind='scatter',x='asian_rate', y='Pct AAPI', ax=ax, color = colors[2]) ax.set_ylabel('Percent of Arrests Asian') ax.set_xlabel('Percent of surrounding area Asian') ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) line = mlines.Line2D([0, 1], [0, 1], color='red') transform = ax.transAxes line.set_transform(transform) ax.add_line(line) fig.savefig('Asian arrests vs neighboorhood.png', bboxinches='tight') fig.show() # + fig, 
ax = plt.subplots(1,1, figsize=(15,10)) ax.scatter(demographics['usage'], demographics['arrest_counts']) ax.set_ylabel('Arrests') ax.set_xlabel('Turnstile Entries') #ax.set_ylim((-1000, 1e4)) m, b, r, p, err = stats.linregress(demographics['usage'], demographics['arrest_counts'], ) ax.plot(demographics['usage'], demographics['usage']*m + b, color='r', label='Linear fit\n R-square: {0}\n p-value: {1}'.format(r**2,p)) m, b, r, p, err = stats.linregress(demographics['usage'], np.log(np.clip(demographics['arrest_counts'], 10e-20, 10e20))) ax.plot(demographics['usage'].sort_values(), np.exp(demographics['usage'].sort_values()*m +b), color='g', label='Exponential fit\n R-square: {0}\n p-value: {1}'.format(r**2,p)) m, b, r, p, err = stats.linregress(demographics['usage']**2, demographics['arrest_counts']) ax.plot(demographics['usage'].sort_values(), (demographics['usage'].sort_values()**2)*m +b, color='orange', label='Parabolic fit\n R-square: {0}\n p-value: {1}'.format(r**2,p)) ax.legend() fig.show() # - model = ols('arrest_counts ~ black_rate + poverty_rate + latino_rate + Black_poor_rate + Latino_poor_rate', data=demographics).fit() # model.summary() modelnormed = ols('normed_arrests ~ white_rate + black_rate + poverty_rate + latino_rate + Black_poor_rate + Latino_poor_rate ', data=demographics).fit() modelnormed.summary(alpha=0.05) modelnormed.summary(alpha=0.05).as_csv() modelnormed.pvalues print(demographics['normed_arrests'].mean()) print('+/-', demographics['normed_arrests'].std()) print(0.1868*100/2.15061) print(0.1769*100/2.15061) print(-0.0285*100/2.15061) resid = demographics['normed_arrests'] - modelnormed.predict() plt.scatter(resid.index, resid) import scipy.stats as stats resid.sort_values(inplace=True) fit = stats.norm.pdf(resid, 0, np.std(resid)) plt.hist(resid, bins=50, normed=True) plt.plot(resid, fit) fig, ax = plt.subplots() stats.probplot(resid, plot=ax)
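# +
# Added sketch (not in the original analysis): the linear / exponential / parabolic fits are
# repeated above for each demographic rate with near-identical code. A small helper collects
# the three R-squared / p-value pairs for any predictor in one table. Assumes `demographics`,
# `stats` (scipy.stats), `np` and `pd` are available as above; `fit_summary` is our own name.
def fit_summary(df, xcol, ycol='normed_arrests'):
    x, y = df[xcol], df[ycol]
    rows = {}
    # Linear fit: y ~ x
    m, b, r, p, err = stats.linregress(x, y)
    rows['linear'] = {'r_squared': r**2, 'p_value': p}
    # Exponential fit: log(y) ~ x, clipping to avoid log of zero
    m, b, r, p, err = stats.linregress(x, np.log(np.clip(y, 10e-20, 10e20)))
    rows['exponential'] = {'r_squared': r**2, 'p_value': p}
    # Parabolic fit: y ~ x^2
    m, b, r, p, err = stats.linregress(x**2, y)
    rows['parabolic'] = {'r_squared': r**2, 'p_value': p}
    return pd.DataFrame(rows).T

# Example: compare the three fits for the poverty rate in one call.
fit_summary(demographics, 'poverty_rate')
# -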
12,999
/02_Stemming.ipynb
df23f41871d40175c845a80062d40a3b378a103c
[]
no_license
AravindDhanasekaran/Natural-Language-Processing
https://github.com/AravindDhanasekaran/Natural-Language-Processing
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
17,062
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] deletable=true editable=true # # Shifting a Solution to the Fixed Frame of Reference # # We solve the problem in the shifted frame of reference. This means we can keep the time in the system separate from the distance L. If we want to look at how a field actually propagates in time, we need to shift back to a fixed or lab frame of reference. To do that we need to connect the time and space dimensions via the speed of light in the system. Here we'll demonstrate that with a two-level system with no interaction strength, so that the light pulse travels through unimpeded as in a vacuum. In the moving frame of reference, that means the pulse arrives at the same time as it departed, $t = 0$. In the fixed frame it will arrive some finite time later. # + [markdown] deletable=true editable=true # ## Define the Problem # + deletable=true editable=true mb_solve_json = """ { "ob_atom": { "decays": [ { "channels": [[0, 1]], "rate": 0.0 } ], "energies": [], "fields": [ { "coupled_levels": [[0, 1]], "detuning": 0.0, "detuning_positive": true, "label": "probe", "rabi_freq": 1.0e-3, "rabi_freq_t_args": { "ampl_1": 1.0, "centre_1": 0.0, "fwhm_1": 1.0 }, "rabi_freq_t_func": "gaussian_1" } ], "num_states": 2 }, "t_min": -2.0, "t_max": 10.0, "t_steps": 100, "z_min": -0.2, "z_max": 1.2, "z_steps": 140, "z_steps_inner": 1, "num_density_z_func": "square_1", "num_density_z_args": { "on_1": 0.0, "off_1": 1.0, "ampl_1": 1.0 }, "interaction_strengths": [ 0.0 ], "velocity_classes": { "thermal_delta_min": -0.0, "thermal_delta_max": 0.0, "thermal_delta_steps": 0, "thermal_delta_inner_min": 0.0, "thermal_delta_inner_max": 0.0, "thermal_delta_inner_steps": 0, "thermal_width": 1.0 }, "method": "mesolve", "opts": {}, "savefile": "mb-solve-fixed-frame" } """ # + deletable=true editable=true from maxwellbloch import mb_solve mb_solve_00 = mb_solve.MBSolve().from_json_str(mb_solve_json) # + [markdown] deletable=true editable=true # ## Solve the Problem # + deletable=true editable=true # %time Omegas_zt, states_zt = mb_solve_00.mbsolve(recalc=False) # + [markdown] deletable=true editable=true # ## Field Output # # In the moving frame of reference, we see that the pulse arrives at the back of the medium ($z = 1$) at the same time ($t = 0$) it left the front of the medium ($z = 0$). # # This of course is non-physical — the information in the pulse must take some finite time to cross the medium. # + deletable=true editable=true import matplotlib.pyplot as plt # %matplotlib inline import seaborn as sns import numpy as np fig = plt.figure(1, figsize=(16, 6)) ax = fig.add_subplot(111) cmap_range = np.linspace(0.0, 1.0e-3, 11) cf = ax.contourf(mb_solve_00.tlist, mb_solve_00.zlist, np.abs(mb_solve_00.Omegas_zt[0]/(2*np.pi)), cmap_range, cmap=plt.cm.Blues) ax.set_title('Rabi Frequency ($\Gamma / 2\pi $)') ax.set_xlabel('Time ($1/\Gamma$)') ax.set_ylabel('Distance ($L$)') for y in [0.0, 1.0]: ax.axhline(y, c='grey', lw=1.0, ls='dotted') plt.colorbar(cf) # + [markdown] deletable=true editable=true # ## Shifting to the Fixed Frame # # To return the solution to a physical system, we must introduce a relation between the time and space dimensions in the problem. This is given by the speed of light in the chosen units. 
# # For example, if in our lab system the speed of light is such that in vacuum it will cover the length of the medium $1.0 L$ in $2.0 \tau$, then the speed of light must be $0.5 L/\tau$. # # Note that this is just a skewing of the results along the time axis to return it to the 'real' time. Nothing changes in the problem. # # We use the `fixed` module and get the fixed frame tlist with the call # # fixed.t_list(mb_solve_00, speed_of_light) # # and the field via # # fixed.rabi_freq_abs(mb_solve_00, 0, speed_of_light) # + deletable=true editable=true from maxwellbloch import fixed speed_of_light = 0.5 # [L Γ] tlist_fixed_frame = fixed.t_list(mb_solve_00, speed_of_light) field_fixed_frame = fixed.rabi_freq_abs(mb_solve_00, 0, speed_of_light, interp_kind='cubic') # + deletable=true editable=true fig = plt.figure(2, figsize=(16, 6)) ax = fig.add_subplot(111) cmap_range = np.linspace(0.0, 1.0e-3, 11) cf = ax.contourf(tlist_fixed_frame, mb_solve_00.zlist, np.abs(field_fixed_frame/(2*np.pi)), cmap_range, cmap=plt.cm.Blues) ax.set_title('Rabi Frequency ($\Gamma / 2\pi $)') ax.set_xlabel('Fixed Time $1/\Gamma$)') ax.set_ylabel('Distance ($L$)') for x in [0.0, 1/speed_of_light]: ax.axvline(x, c='red', lw=1.0, ls='dashed') for y in [0.0, 1.0]: ax.axhline(y, c='grey', lw=1.0, ls='dotted') plt.colorbar(cf) # + [markdown] deletable=true editable=true # And we can change the speed of light if we like. # # If in our lab system the speed of light is such that in vacuum it will cover the length of the medium $1.0 L$ in $5.0 \tau$, then the speed of light must be $0.2 L/\tau$. # + deletable=true editable=true speed_of_light = 0.2 # [L Γ] tlist_fixed_frame = fixed.t_list(mb_solve_00, speed_of_light) field_fixed_frame = fixed.rabi_freq_abs(mb_solve_00, 0, speed_of_light, interp_kind='cubic') # + deletable=true editable=true fig = plt.figure(2, figsize=(16, 6)) ax = fig.add_subplot(111) cmap_range = np.linspace(0.0, 1.0e-3, 11) cf = ax.contourf(tlist_fixed_frame, mb_solve_00.zlist, np.abs(field_fixed_frame/(2*np.pi)), cmap_range, cmap=plt.cm.Blues) ax.set_title('Rabi Frequency ($\Gamma / 2\pi $)') ax.set_xlabel('Fixed Time $1/\Gamma$)') ax.set_ylabel('Distance ($L$)') for x in [0.0, 1/speed_of_light]: ax.axvline(x, c='red', lw=1.0, ls='dashed') for y in [0.0, 1.0]: ax.axhline(y, c='grey', lw=1.0, ls='dotted') plt.colorbar(cf) plt.savefig('images/mb-solve-fixed-frame.png') # + [markdown] deletable=true editable=true # The field must be interpolated to find the value at the shifted time points. We can choose the kind of spline interpolation with `interp_kind` which may be `'linear'`, `'cubic'` or `'quintic'`. It is linear by default.
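# +
# Added illustration (a minimal sketch, not the maxwellbloch.fixed implementation): shifting
# back to the lab frame amounts to evaluating the moving-frame field at t - z/c for each z,
# i.e. an interpolation along the time axis. Assumes `mb_solve_00` and `np` as above;
# `to_fixed_frame` is our own helper name.
from scipy.interpolate import interp1d

def to_fixed_frame(tlist, zlist, field_zt, speed_of_light, kind='linear'):
    """Skew a moving-frame field |Omega(z, t)| onto a common lab-frame time grid."""
    # Extend the time grid so the pulse exit at t = L/c is still visible
    t_fixed = np.linspace(tlist[0], tlist[-1] + zlist[-1]/speed_of_light, len(tlist))
    shifted = np.zeros((len(zlist), len(t_fixed)))
    for i, z in enumerate(zlist):
        f = interp1d(tlist, np.abs(field_zt[i, :]), kind=kind,
                     bounds_error=False, fill_value=0.0)
        # Lab-frame field at (z, t) is the moving-frame field at (z, t - z/c)
        shifted[i, :] = f(t_fixed - z/speed_of_light)
    return t_fixed, shifted

# Example usage with the solution above, for a vacuum speed of light of 0.5 L Γ:
t_fix, field_fix = to_fixed_frame(mb_solve_00.tlist, mb_solve_00.zlist,
                                  mb_solve_00.Omegas_zt[0], speed_of_light=0.5)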
6,565
/ch08/udfs.ipynb
9b79e6abf2511d9bb3815b6347e6794adf74faa8
[]
no_license
benlorence/python-for-excel
https://github.com/benlorence/python-for-excel
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
3,095
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.15.2
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # User-Defined Functions (UDFs)

# ## Function Decorators

# This is the definition of the function decorator
def verbose(func):
    def wrapper():
        print("Before calling the function.")
        func()
        print("After calling the function.")
    return wrapper

# Using a function decorator
@verbose
def print_hello():
    print("hello!")

# Effect of calling the decorated function
print_hello()

# ## Fetching Data from Google Trends

from pytrends.request import TrendReq

# First, let's instantiate a TrendRequest object
trend = TrendReq()

# Now we can print the suggestions as they would appear
# online in the dropdown of Google Trends after typing in "Python"
suggestions = trend.suggestions('Python')
for suggestion in suggestions:
    print(suggestion)

# ## Caching

# +
import time

cache = {}

def slow_sum(a, b):
    key = (a, b)
    if key in cache:
        return cache[key]
    else:
        time.sleep(2)  # sleep for 2 seconds
        result = a + b
        cache[key] = result
        return result
# -

# %%time
slow_sum(1, 2)

# %%time
slow_sum(1, 2)
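# ## Caching with functools

# An added aside (a minimal sketch, not from the original chapter): the standard library's
# `functools.lru_cache` implements the same memoization idea as the hand-rolled `cache`
# dictionary above, without managing the dictionary yourself; `slow_sum_cached` is our own name.

# +
import functools
import time


@functools.lru_cache(maxsize=None)
def slow_sum_cached(a, b):
    time.sleep(2)  # simulate an expensive computation
    return a + b
# -

# %%time
slow_sum_cached(1, 2)

# %%time
slow_sum_cached(1, 2)  # second call is served from the cache and returns immediately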
1,357
/notebooks/Section1-Basics/1.BasicSpark/5.Word Count.ipynb
0d27aa4cd3dfbf489cc176250a570cba09612d65
[]
no_license
fliphilipp/Public-DSC291
https://github.com/fliphilipp/Public-DSC291
0
2
null
2020-04-14T03:48:11
2020-04-08T00:15:38
Jupyter Notebook
Jupyter Notebook
false
false
.py
33,430
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # Importing Libraries import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns from sklearn.metrics import * import scikitplot as skplt import plotly.express as px import plotly.graph_objs as go import cufflinks as cf from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot init_notebook_mode(connected=True) cf.go_offline() # Ignore warnings import warnings warnings.filterwarnings('ignore') # Set styling sns.set(style= "ticks") plt.style.use("fivethirtyeight") # - #loading the data df_credit = pd.read_csv("Data/creditcard.csv") df_credit #looking the type and searching for null values df_credit.info() df_credit.isnull().sum() df_credit.describe().T # + # Creating Bar chart of claim counts temp = df_credit['Class'].value_counts()/len(df_credit) fig = px.bar(x= ['Normal', 'Fraud'], y= df_credit['Class'].value_counts(),\ template= 'presentation', text= [f'{round(temp[0]*100, 2)}%', f'{round(temp[1]*100, 2)}%'],\ title= 'Claim Counts', height= 400, width = 500) fig.update_traces(marker= dict(color= ['rgb(250, 98, 89)', 'rgb(95, 111, 176)'],\ line=dict(color='#000000', width=2), opacity= 0.8) ) # fig.show() # - timedelta = pd.to_timedelta(df_credit['Time'], unit='s') df_credit['Time_min'] = (timedelta.dt.components.minutes).astype(int) df_credit['Time_hour'] = (timedelta.dt.components.hours).astype(int) timedelta #Exploring the distribuition by Class types throught hours and minutes plt.figure(figsize=(12,5)) sns.distplot(df_credit[df_credit['Class'] == 0]["Time_hour"], color='g') sns.distplot(df_credit[df_credit['Class'] == 1]["Time_hour"], color='r') plt.title('Fraud vs Normal Transactions by Hours') plt.xlim([-1,25]) plt.show() #Exploring the distribuition by Class types throught hours and minutes plt.figure(figsize=(12,5)) sns.distplot(df_credit[df_credit['Class'] == 0]["Time_min"], color='g') sns.distplot(df_credit[df_credit['Class'] == 1]["Time_min"], color='r') plt.title('Fraud x Normal Transactions by minutes', fontsize=17) plt.xlim([-1,61]) plt.show() # + #To clearly the data of frauds and no frauds df_fraud = df_credit[df_credit['Class'] == 1] df_normal = df_credit[df_credit['Class'] == 0] print("Fraud transaction statistics") df_fraud[["Amount"]].describe().T # - print("\nNormal transaction statistics") df_normal[["Amount"]].describe().T df_credit #Feature engineering to a better visualization of the values df_credit['Amount_log'] = np.log(df_credit['Amount'] + 0.01) df_credit[['Amount_log', 'Amount']] print("Fraud transaction statistics") df_credit[df_credit['Class'] == 1][["Amount_log"]].describe().T print("\nNormal transaction statistics") df_credit[df_credit['Class'] == 0][["Amount_log"]].describe().T # + # Exploring the Amount by Class and see the distribuition of Amount transactions fig, ax= plt.subplots(1, 2, figsize= (14, 6)) sns.boxplot(x ="Class", y="Amount", data=df_credit, ax=ax[0]) ax[0].set_title("Class x Amount") ax[0].set_xlabel("Is Fraud?") ax[0].set_ylabel("Amount (US)") # ax[0].grid(b=None) sns.boxplot(x ="Class", y="Amount_log", data=df_credit, ax=ax[1]) ax[1].set_title('Class x Amount (Log)') ax[1].set_xlabel("Is Fraud?") ax[1].set_ylabel("Amount (Log)") # ax[1].grid(b=None) plt.show() # - #Looking the Amount and time distribuition of FRAUD transactions sns.lmplot(y="Amount", 
x="Time_min", fit_reg=False, aspect=1.8, data=df_credit, hue='Class') plt.title("Amounts by Minutes of Frauds and Normal Transactions",fontsize=16) plt.show() # + sns.lmplot(y="Amount", x="Time_hour", fit_reg=False,aspect=1.8, data=df_credit, hue='Class') plt.title("Amounts by Hour of Frauds and Normal Transactions", fontsize=16) plt.show() # + from matplotlib.gridspec import GridSpec #Looking the V's features columns = df_credit.iloc[:,1:29].columns frauds = df_credit.Class == 1 normals = df_credit.Class == 0 grid = GridSpec(7, 4) plt.figure(figsize=(18,20*2)) for n, col in enumerate(df_credit[columns]): ax = plt.subplot(grid[n]) sns.distplot(df_credit[col][frauds], bins = 50, color='g') #Will receive the "semi-salmon" violin sns.distplot(df_credit[col][normals], bins = 50, color='r') #Will receive the "ocean" color ax.set_ylabel('Density') ax.set_title(str(col)) ax.set_xlabel('') plt.show() # - # Half heatmap corr = df_credit.corr() mask = np.zeros_like(corr) mask[np.triu_indices_from(mask)] = True with sns.axes_style("white"): f, ax = plt.subplots(figsize=(18, 18)) ax = sns.heatmap(corr, mask= mask, square= True, annot= True, fmt= '0.2f',\ linewidths= .8, cmap= "RdYlGn_r") X = df_credit.drop(["Class"], axis=1).values #Setting the X to do the split y = df_credit["Class"].values # transforming the values in array from sklearn.model_selection import train_test_split # Train Test Split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size= 0.2, random_state= 2, stratify= y) # + from imblearn.pipeline import make_pipeline as make_pipeline_imb # To do our transformation in a unique time from imblearn.over_sampling import SMOTE from sklearn.ensemble import RandomForestClassifier classifier = RandomForestClassifier # build model with SMOTE imblearn smote_pipeline = make_pipeline_imb(SMOTE(random_state=4), \ classifier(random_state=42, n_jobs=-1)) smote_model = smote_pipeline.fit(X_train, y_train) smote_prediction = smote_model.predict(X_test) # + from collections import Counter #Showing the diference before and after the transformation used print("normal data distribution: {}".format(Counter(y))) X_smote, y_smote = SMOTE().fit_sample(X_train, y_train) print("SMOTE data distribution: {}".format(Counter(y_smote))) # - # plot confusion matrix for RF classifier skplt.metrics.plot_confusion_matrix(y_test, smote_prediction, title="Confusion Matrix for RF Classifier") plt.show() def print_results(y_test, y_pred, model='Results'): from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, classification_report print(f"{model} Classifier") print(f"\nAccuracy is {round(accuracy_score(y_test, y_pred), 7)}") print(f"\nPrecision-score is {round(precision_score(y_test, y_pred), 2)}") print(f"\nRecall-score is {round(recall_score(y_test, y_pred), 2)}") print(f"\nF1-score is {round(f1_score(y_test, y_pred), 2)}") print(f"\nclassification Report:\n {classification_report(y_test, y_pred)}\n") skplt.metrics.plot_confusion_matrix(y_test, y_pred, title=f"Confusion Matrix for {model} Classifier") print_results(y_test, smote_prediction, model= 'RF') from sklearn.metrics import * fpr, tpr, thresholds = roc_curve(y_test, smote_prediction) # Plot ROC curve plt.plot(fpr, tpr) plt.xlabel('Recall') plt.ylabel('Precision') plt.title('Precision Recall Curve') plt.show() # + smote_prediction_prob = smote_pipeline.predict_proba(X_test)[:,1] # Generate precision recall curve values: precision, recall, thresholds precision, recall, thresholds = precision_recall_curve(y_test, 
smote_prediction_prob) # Plot ROC curve plt.plot(precision, recall) plt.xlabel('Recall') plt.ylabel('Precision') plt.title('Precision Recall Curve') plt.show() # + #SMOTE from imblearn.over_sampling import SMOTE, ADASYN X_train_res, y_train_res = SMOTE().fit_resample(X_train, y_train) print(f'Before OverSampling, label 1: {sum(y == 1)}') print(f'Before OverSampling, label 0: {sum(y == 0)}\n') print(f'After OverSampling, the shape of X_train: {X_train_res.shape}') print(f'After OverSampling, the shape of y_train: {y_train_res.shape}\n') print(f'After OverSampling, label 1: {sum(y_train_res == 1)}') print(f'After OverSampling, label 0: {sum(y_train_res == 0)}') # - #Model model = RandomForestClassifier(random_state=42, n_jobs=-1) model.fit(X_train_res, y_train_res) y_pred = model.predict(X_test) print_results(y_test, y_pred, model= 'RF') # + from sklearn.model_selection import KFold, cross_val_score, GridSearchCV #params of the model param_grid = {"max_depth": [3,5, None], "n_estimators":[3,5,10], "max_features": [5,6,7,8]} # Creating the classifier rf = RandomForestClassifier(random_state=3, n_jobs=-1) grid_search_rf = GridSearchCV(rf, param_grid=param_grid, cv=5, scoring='recall') # - grid_search_rf.fit(X_train_res, y_train_res) grid_search_rf.best_params_ # Running the fit rf_tuned = RandomForestClassifier(max_depth=None, max_features=5, n_estimators=10, n_jobs=-1) rf_tuned.fit(X_train_res, y_train_res) y_pred_rf = rf_tuned.predict(X_test) print_results(y_test, y_pred_rf, model= 'RF') features = list(df_credit.drop('Class', axis=1).columns) skplt.estimators.plot_feature_importances(rf_tuned, feature_names=features,\ figsize= (15, 8), title= "Feature Importance for RF Classifier", x_tick_rotation= 90) # + #Predicting proba y_pred_rf_prob = rf_tuned.predict_proba(X_test)[:,1] # skplt.metrics.plot_precision_recall_curve(y_test, y_pred_rf_prob) # Generate precision recall curve values: precision, recall, thresholds precision, recall, thresholds = precision_recall_curve(y_test, y_pred_rf_prob) # Plot ROC curve plt.plot(precision, recall) plt.xlabel('Recall') plt.ylabel('Precision') plt.title('Precision Recall Curve') plt.show() # - y_pred_rf = rf_tuned.predict(X_test) print_results(y_test, y_pred_rf, model= 'RF')
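# +
# Added sketch (not part of the original notebook): a single-number summary of the curves
# plotted above. On data this imbalanced, average precision (the area under the
# precision-recall curve) is usually more telling than ROC AUC.
# Assumes `y_test` and `y_pred_rf_prob` are defined as above.
from sklearn.metrics import roc_auc_score, average_precision_score

print(f"ROC AUC for tuned RF:           {roc_auc_score(y_test, y_pred_rf_prob):.4f}")
print(f"Average precision for tuned RF: {average_precision_score(y_test, y_pred_rf_prob):.4f}")
# -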
9,802
/Tray.ipynb
e77ba381773ab9ed172333029ac3986a27565217
[]
no_license
Tsendee0502/K-Means-Image-Segmentation
https://github.com/Tsendee0502/K-Means-Image-Segmentation
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
241,759
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Overview # # This notebook will show you how to create and query a table or DataFrame that you uploaded to DBFS. [DBFS](https://docs.databricks.com/user-guide/dbfs-databricks-file-system.html) is a Databricks File System that allows you to store data for querying inside of Databricks. This notebook assumes that you have a file already inside of DBFS that you would like to read from. # # This notebook is written in **Python** so the default cell type is Python. However, you can use different languages by using the `%LANGUAGE` syntax. Python, Scala, SQL, and R are all supported. # + # File location and type # BDM -- changed to part_*_ as I have loaded parquet files 0000-0004 -- more data -- better lda -- I hope file_location = "/FileStore/tables/part_r_*_fbc86a65_*.parquet" file_type = "parquet" # CSV options infer_schema = "false" first_row_is_header = "false" delimiter = "," # The applied options are for CSV files. For other file types, these will be ignored. df = spark.read.format(file_type) \ .option("inferSchema", infer_schema) \ .option("header", first_row_is_header) \ .option("sep", delimiter) \ .load(file_location) display(df) # + # Create a view or table # Change the subject text to lower case and the timestamp to a date whereby we drop the hours and minutes such # that the format of the timestamp data is now 'YYYY-MM-DD' -- This could be an issue as we lost hh-mm but it was only way # for me to get generic filter my dataframe function working to get emails by date --YYYY-MM-DD # May have to revisit # from pyspark.sql.functions import lower, col, date_format #from pyspark.sql.functions import * # df = df.select("mid","sender", date_format(col("timestamp"),'yyyy-MM-dd').alias("timestamp"), \ "omid",lower(col('subject')).alias('subject'), "body", "folder","body_cleaned", "concepts") temp_table_name = 'df' df.createOrReplaceTempView('df') #display(df) df.count() # + # %sql /* Query the created temp table in a SQL cell Left here as an example of code syntax */ select count(*) from `df` # + # df1 and df2 are temp dataframes containing subject with replies and forwards as it was thought # that it might be a good idea to create a dataframe where we eliminate those emails # no_for_or_replies is the final df that eliminated the forwards and replies df1 = df.filter(df['subject'].startswith("re:")) df2 = df.filter(df['subject'].startswith("fw:")) df1.createOrReplaceTempView('df1') df2.createOrReplaceTempView('df2') no_for_or_replies_df = spark.sql("select * from df minus (select * from df1 union select * from df2)") display(no_for_or_replies_df) no_for_or_replies_df.createOrReplaceTempView('no_for_or_replies_df') # + # %sql select count(*) from no_for_or_replies_df # - # This creates a dataframe (only_bigrams_df) from no_for_or_replies_df above where we reduce further to eliminate any emails that do not have a bigram from # the concepts column import pyspark.sql.functions as F only_bigrams_df=spark.sql("SELECT * from no_for_or_replies_df").filter(F.size('concepts') > 0) only_bigrams_df.createOrReplaceTempView('only_bigrams_df') # %sql select count(*) from only_bigrams_df # Below is a function that is meant to clean the enron email data from text that is not useful to summarization -- thanks Marta # I attempted to remove all puncuation except for periods but it wasnt 
working -- Revisit # We should all add to clean function as source to pre-process data import re, html, string rem = ['(?s)<TYPE>GRAPHIC.*?</TEXT>', '(?s)<TYPE>EXCEL.*?</TEXT> ', '(?s)<TYPE>PDF.*?</TEXT>', '(?s)<TYPE>ZIP.*?</TEXT>', '(?s)<TYPE>COVER.*?</TEXT>', '(?s)<TYPE>CORRESP.*?</TEXT>', '(?s)<TYPE>EX-10[01].INS.*?</TEXT>', '(?s)<TYPE>EX-99.SDR [KL].INS.*?</TEXT>', '(?s)<TYPE>EX-10[01].SCH.*?</TEXT>', '(?s)<TYPE>EX-99.SDR [KL].SCH.*?</TEXT>', '(?s)<TYPE>EX-10[01].CAL.*?</TEXT>', '(?s)<TYPE>EX-99.SDR [KL].CAL.*?</TEXT>', '(?s)<TYPE>EX-10[01].DEF.*?</TEXT>', '(?s)<TYPE>EX-99.SDR [KL].LAB.*?</TEXT>', '(?s)<TYPE>EX-10[01].LAB.*?</TEXT>', '(?s)<TYPE>EX-99.SDR [KL].LAB.*?</TEXT>', '(?s)<TYPE>EX-10[01].PRE.*?</TEXT>', '(?s)<TYPE>EX-99.SDR [KL].PRE.*?</TEXT>', '(?s)<TYPE>EX-10[01].REF.*?</TEXT>', '(?s)<TYPE>XML.*?</TEXT>', '<TYPE>.*', '<SEQUENCE>.*', '<FILENAME>.*', '<DESCRIPTION>.*', '(?s)(?i)<Head>.*?</Head>', '(?s)(?i)<Table.*?</Table>', '(?s)<[^>]*>'] # def clean(txt): # txt=txt.replace (".",". ") doc = re.sub("\xa0|\n|\t|—|_"," ",html.unescape(txt)) remove = string.punctuation remove = remove.replace(".", "") # don't remove periods pattern = r"[{}]".format(remove) # create the pattern # bdm_doc=re.sub(pattern, "", txt) # return re.sub("(?s) +"," ",re.sub(rem[-1]," ",bdm_doc)) return re.sub("(?s) +"," ",re.sub(rem[-1]," ",doc)) # def add_space(s): res = re.sub('\s+$', '', re.sub('\s+', ' ', re.sub('\.', '. ', s))) if res[-1] != '.': res += '.' return res def remove_punct(s): remove=string.punctuation remove=remove+'\0123456789[]' s.translate(None, remove) #def remove_punct(s): # remove = string.punctuation # pattern = r"[{}]".format(remove) # create the pattern # return re.sub(pattern, "", s) # + #Generic function to filter a dataframe based on passing a dictionary #consisting of key - pairs as {column_name:column_value} #This functions also requires that the name of the dataframe is passed in as well #Could not get sql to be truly dynamic due to the sql with argument requiring a select_string.format syntax as the .format part #is not allowed to be a string (Looked up eval, exec, set_attrib, etc) None of these would work. #This function requires a temporary table as the dataframe and can only have a max of 3 items in dictionary and thus, #only 3 values chanied and filtered in the where clause. 
def filter_my_dataframe (my_dict, my_df): # my_counter=0 my_query='select * from ' + my_df + ' where ' # # Build the select and where clause string # for my_key in test_dict.keys(): my_counter+=1 if my_counter==1: my_query=my_query+my_key+'="{}"' else: my_query=my_query+' and '+my_key+'="{}"' # format_str='.format(' my_counter=0 # # Determine the number of passed values -- max is 3 # for my_value in test_dict.values(): my_counter+=1 if my_counter==1: my_first_val=my_value else: if my_counter==2: my_second_val=my_value else: if my_counter==3: my_third_val=my_value # # Build the format that requires appending to the select/where string as 'sql_string'.format # if my_counter==1: my_query=my_query.format(my_first_val) if my_counter==2: my_query=my_query.format(my_first_val, my_second_val) if my_counter==3: my_query=my_query.format(my_first_val,my_second_val, my_third_val) # return (sqlContext.sql(my_query)) # - # The nltk_text_summarization function was written from code set in article (https://stackabuse.com/text-summarization-with-nltk-in-python/) # This function will return n sentences that are weighted with frequently used words for each email as an rdd/dataframe to be provided as an argument to the function # import nltk import heapq from nltk.tokenize import sent_tokenize,word_tokenize nltk.download('punkt') nltk.download('stopwords') def nltk_text_summarization(my_email, my_num_of_summary_sentences): try: # # Use nltk to tokenize the sentences. Initialize the stop words and dictionat for word_frequencies # sentence_list = nltk.sent_tokenize(my_email) stopwords = nltk.corpus.stopwords.words('english') word_frequencies = {} # # Load the words from email into dictionary and initialize to 1 and keep running count as value # for word in nltk.word_tokenize(my_email): if word not in stopwords: if word not in word_frequencies.keys(): word_frequencies[word] = 1 else: word_frequencies[word] += 1 # # weight the word counts by dividing by the maximum word count found in the email # maximum_frequency = max(word_frequencies.values()) for word in word_frequencies.keys(): word_frequencies[word] = (word_frequencies[word]/maximum_frequency) # # Find the sentences and add word frequency weights to the sentences in a dictionary structure. 
# This will identify the sentences that have the highest word weights that will be used for the text summary # sentence_scores = {} for sent in sentence_list: for word in nltk.word_tokenize(sent.lower()): if word in word_frequencies.keys(): if len(sent.split(' ')) < 30: if sent not in sentence_scores.keys(): sentence_scores[sent] = word_frequencies[word] else: sentence_scores[sent] += word_frequencies[word] # # Store the count of the number of sentences in the email # If the sentence length is < 3 , then use all sentences for the text summary # Else capture the top N sentences specified in the call to function to be used for the text summary # the_length=len(sentence_scores) if the_length < my_num_of_summary_sentences: summary_sentences = heapq.nlargest(the_length, sentence_scores, key=sentence_scores.get) else: summary_sentences = heapq.nlargest(my_num_of_summary_sentences, sentence_scores, key=sentence_scores.get) my_summary = ' '.join(summary_sentences) return(my_summary) except: pass num_of_summary_sentences=3 my_nltk_df = no_for_or_replies_df.select('mid','body').rdd.map(lambda x:(x[0],nltk_text_summarization(clean(str(x[1]).replace("."," ")),num_of_summary_sentences))) \ .filter(lambda x: x[1]).filter(lambda x: x[1] is not None) #big_test.collect() # .filter(lambda x: x).filter(lambda x: x is not None) my_nltk_df1 = my_nltk_df.toDF(['mid','summary']) my_nltk_df1.createOrReplaceTempView('my_nltk_df1') num_of_summary_sentences=3 my_nltk_df = no_for_or_replies_df.select('mid','body').rdd.map(lambda x:(x[0],nltk_text_summarization(clean(str(x[1])),num_of_summary_sentences))) \ .filter(lambda x: x[1]).filter(lambda x: x[1] is not None) #big_test.collect() # .filter(lambda x: x).filter(lambda x: x is not None) my_nltk_df1 = my_nltk_df.toDF(['mid','summary']) my_nltk_df1.createOrReplaceTempView('my_nltk_df1') # %sql select * from my_nltk_df1 # Below shows a test of a generic function created to filter a dataframe. # The function filter_my_dataframe can be used to compund your where clause via a dictionary you create for your columns in the where clause # The function dynamically builds the where clause via parameters passed via dictionary for where clause. 
The second parameter is the name of the dataframe # Belows test grabs an email via message id , mid, but can also be used to grab a certain date as well as a spcific sender # #test_dict={'mid':404632} test_dict={'mid':68633} #test_dict={'sender':'[email protected]'} test_df='df' test_a_row=filter_my_dataframe(test_dict,test_df) #type(test_a_row) t=test_a_row.select('body').collect() #t=test_a_row.select 'body'.rdd #mi=nltk_text_summarization(clean(str(t))) #my_weight_frequency(mi) #nltk_text_summarization(clean(str(t))) sum_sentences=nltk_text_summarization(clean(str(t)),3) print(sum_sentences) # %sql select body from no_for_or_replies_df where mid=68633 # + from gensim.summarization.summarizer import summarize import re import nltk from nltk.tokenize import sent_tokenize,word_tokenize def short(sent): try: return summarize(sent) except: pass # - body_clean = no_for_or_replies_df.select('mid','body') \ .rdd.map(lambda x:(x[0],short(re.sub(r'[^a-zA-Z\s\.]', ' ',clean(str(x[1])))))) \ .filter(lambda x: x[1]).filter(lambda x: x[1] is not None) body_clean1 = body_clean.toDF(['mid','summary']) body_clean1.createOrReplaceTempView('body_clean1') # + # %sql SELECT * FROM body_clean1 where summary <> ""; # - df_lda = spark.sql("select a.*, b.summary from no_for_or_replies_df a, body_clean1 b \ where a.mid=b.mid and b.summary<>\'\'").rdd df_lda.take(5) df_lda = spark.sql("SELECT * FROM body_clean1 where summary <> \'\'").rdd.map(lambda x: (x[1],x[0])) print(type(df_lda)) df_lda.collect() # %sql select count(*) from body_clean1 where summary is null # %sql select a.*, b.summary as gensim_summary, c.summary as nltk_summary from no_for_or_replies_df a, body_clean1 b, my_nltk_df1 c where a.mid=b.mid and a.mid=c.mid import gensim from gensim.utils import simple_preprocess from gensim.parsing.preprocessing import STOPWORDS from gensim.parsing.porter import PorterStemmer from nltk.stem import WordNetLemmatizer, SnowballStemmer from nltk.stem.porter import * import numpy as np np.random.seed(2018) import nltk nltk.download('wordnet') def lemmatize_stemming(text): stemmer = PorterStemmer() return stemmer.stem(WordNetLemmatizer().lemmatize(text, pos='v')) def lemstem_preprocess(text): result = [] for token in gensim.utils.simple_preprocess(text): if token not in gensim.parsing.preprocessing.STOPWORDS and len(token) > 3: result.append(lemmatize_stemming(token)) return result # + df.count() test_dict={'mid':68633} test_df='df' test_a_row=filter_my_dataframe(test_dict,test_df) #t=test_a_row.select('body','mid').collect() #t=test_a_row.select('body') #t=test_a_row.select('body','mid') #type(t) #print(t) #t=no_for_or_replies_df.select ('body','mid').collect() t=spark.sql ("select body from no_for_or_replies_df") #type(t.collect()) #type(t.take(1)) processed_email =t.rdd.flatMap(lambda x:lemstem_preprocess(clean(str(x)))).collect() #processed_email =t.rdd.flatMap(lambda x:(lemstem_preprocess(clean(str(x[1]))),x[0])).collect() dictionary = gensim.corpora.Dictionary(processed_email) processed_email.take(5) # + #test_dict={'mid':68633} #test_df='df' #test_a_row=filter_my_dataframe(test_dict,test_df) #t=test_a_row.select('body','mid').collect() #t=test_a_row.select('body','mid') #type(t) #print(t) t = df.select('body_cleaned').rdd.map(lambda x:x.asList()).take(5) print(t) # + t=spark.sql ("select body_clean from df").collect() stemmer = PorterStemmer() process_email= words = [] for word in clean(str(t[0])).split(' '): words.append(word) print(words) for token in gensim.utils.simple_preprocess(clean(str(t[0]))): 
print(token) print(lemmatize_stemming(token)) from pyspark.sql import SQLContext, Row from pyspark.ml.feature import CountVectorizer from pyspark.mllib.clustering import LDA, LDAModel from pyspark.mllib.linalg import Vector, Vectors # Loads data. data = df_lda.rdd.map(lambda (words,idd): Row(idd = idd, words = words.split(" "))) #data.count() docDF = spark.createDataFrame(data) Vector = CountVectorizer(inputCol="words", outputCol="vectors") model = Vector.fit(docDF) result = model.transform(docDF) corpus = result.select("idd", "vectors").rdd.map(lambda x,y: [x,Vectors.fromML(y)]).cache() # Cluster the documents into three topics using LDA ldaModel = LDA.train(corpus, k=3,maxIterations=100,optimizer='online') topics = ldaModel.topicsMatrix() vocabArray = model.vocabulary wordNumbers = 10 # number of words per topic topicIndices = sc.parallelize(ldaModel.describeTopics(maxTermsPerTopic = wordNumbers)) def topic_render(topic): # specify vector id of words to actual words terms = topic[0] result = [] for i in range(wordNumbers): term = vocabArray[terms[i]] result.append(term) return result topics_final = topicIndices.map(lambda topic: topic_render(topic)).collect() for topic in range(len(topics_final)): print ("Topic" + str(topic) + ":") for term in topics_final[topic]: print (term) print ('\n') # + from pyspark.ml.clustering import KMeans from pyspark.ml.evaluation import ClusteringEvaluator # Loads data. dataset = spark.read.format("libsvm").load("data/mllib/sample_kmeans_data.txt") # Trains a k-means model. kmeans = KMeans().setK(2).setSeed(1) model = kmeans.fit(dataset) # Make predictions predictions = model.transform(dataset) # Evaluate clustering by computing Silhouette score evaluator = ClusteringEvaluator() silhouette = evaluator.evaluate(predictions) print("Silhouette with squared euclidean distance = " + str(silhouette)) # Shows the result. centers = model.clusterCenters() print("Cluster Centers: ") for center in centers: print(center)
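# +
# Added sketch (not in the original notebook): the MLlib LDA cell above relies on Python 2
# style tuple-unpacking lambdas. As a driver-side alternative, and under the assumption that
# `lemstem_preprocess` and the `body_clean1` temp view exist as defined above, gensim's
# LdaModel can be fitted on a sample of the collected summaries. New variable names here
# (`sample_docs`, `gensim_dictionary`, `lda_gensim`) are our own and avoid clobbering the
# `dictionary` built earlier.
from gensim import corpora, models

sample_docs = [row.summary for row in
               spark.sql("select summary from body_clean1 where summary <> ''")
                    .limit(1000).collect()]
tokenised = [lemstem_preprocess(doc) for doc in sample_docs]

# Build the bag-of-words corpus, dropping very rare and very common stems
gensim_dictionary = corpora.Dictionary(tokenised)
gensim_dictionary.filter_extremes(no_below=5, no_above=0.5)
bow_corpus = [gensim_dictionary.doc2bow(doc) for doc in tokenised]

lda_gensim = models.LdaModel(bow_corpus, num_topics=3, id2word=gensim_dictionary,
                             passes=5, random_state=42)
for topic_id, terms in lda_gensim.print_topics(num_words=10):
    print(topic_id, terms)
# -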
16,526
/code/LSTM.ipynb
07d67fccb6870fb7c0a04c0362a28073269c4620
[]
no_license
swakv/Multi-Lingual-Text-Emotion-Recognition
https://github.com/swakv/Multi-Lingual-Text-Emotion-Recognition
1
0
null
null
null
null
Jupyter Notebook
false
false
.py
10,846
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) from tqdm import tqdm from sklearn.model_selection import train_test_split import tensorflow as tf from keras.models import Sequential from keras.layers.recurrent import LSTM, GRU,SimpleRNN from keras.layers.core import Dense, Activation, Dropout from keras.layers.embeddings import Embedding from keras.layers.normalization import BatchNormalization from keras.utils import np_utils from sklearn import preprocessing, decomposition, model_selection, metrics, pipeline from keras.layers import GlobalMaxPooling1D, Conv1D, MaxPooling1D, Flatten, Bidirectional, SpatialDropout1D from keras.preprocessing import sequence, text from keras.callbacks import EarlyStopping # + _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a" _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" # Detect hardware, return appropriate distribution strategy try: # TPU detection. No parameters necessary if TPU_NAME environment variable is # set: this is always the case on Kaggle. tpu = tf.distribute.cluster_resolver.TPUClusterResolver() print('Running on TPU ', tpu.master()) except ValueError: tpu = None if tpu: tf.config.experimental_connect_to_cluster(tpu) tf.tpu.experimental.initialize_tpu_system(tpu) strategy = tf.distribute.experimental.TPUStrategy(tpu) else: # Default distribution strategy in Tensorflow. Works on CPU and single GPU. strategy = tf.distribute.get_strategy() print("REPLICAS: ", strategy.num_replicas_in_sync) # - train = pd.read_csv('../dataset/train_modified.csv') valid = pd.read_csv('../dataset/val_modified.csv') test = pd.read_csv('../dataset/test_modified.csv') train # + #IMP DATA FOR CONFIG AUTO = tf.data.experimental.AUTOTUNE # Configuration EPOCHS = 3 BATCH_SIZE = 16 * strategy.num_replicas_in_sync MAX_LEN = 168 # + xtrain, ytrain = train.Comment.values, train.Emotion.values xvalid, yvalid = valid.Comment.values, valid.Emotion.values xtest, ytest = test.Comment.values, test.Emotion.values train['Comment'].apply(lambda x:len(str(x).split())).max() # + # using keras tokenizer here token = text.Tokenizer(num_words=None) max_len = 168 token.fit_on_texts(list(xtrain) + list(xvalid)) xtrain_seq = token.texts_to_sequences(xtrain) xvalid_seq = token.texts_to_sequences(xvalid) xtest_seq = token.texts_to_sequences(xtest) #zero pad the sequences xtrain_pad = sequence.pad_sequences(xtrain_seq, maxlen=max_len) xvalid_pad = sequence.pad_sequences(xvalid_seq, maxlen=max_len) xtest_pad = sequence.pad_sequences(xtest_seq, maxlen=max_len) word_index = token.word_index # - metrics = [tf.keras.metrics.SparseCategoricalAccuracy('accuracy', dtype=tf.float32)] loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) # + with strategy.scope(): # A simpleRNN without any pretrained embeddings and one dense layer model = Sequential() model.add(Embedding(len(word_index) + 1, 300, input_length=max_len)) model.add(LSTM(128)) model.add(Dense(7, activation='softmax')) model.compile(loss=loss, optimizer='adam', metrics=['accuracy']) model.summary() # - model.fit(xtrain_pad, ytrain,epochs=10,validation_data=(xvalid_pad, yvalid), batch_size=64*strategy.num_replicas_in_sync) #Multiplying by Strategy to 
run on TPU's model.evaluate(xtest_pad, ytest)
3,828
/bert_finetuning_YouTube_clickbait_domain_adaption_all.ipynb
a9dc250f7b58513aa806fb61a208be57a5891d93
[]
no_license
rahul94jh/MSC-Research
https://github.com/rahul94jh/MSC-Research
5
1
null
null
null
null
Jupyter Notebook
false
false
.py
217,955
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np from matplotlib import pyplot as plt import re from itertools import count import time import statistics data = [x[:-1] for x in open('10.2018.txt').readlines()] data # + findnums = re.compile(r'-?\d+') positions = np.zeros((len(data),2)).astype(int) velocities = np.zeros((len(data),2)).astype(int) for i in range(0, len(data)): find = findnums.findall(data[i]) positions[i] = np.array([int(x) for x in find[0:2]]) velocities[i] = np.array([int(x) for x in find[2:4]]) print(positions, velocities) # + lookahead = np.array(positions) for turns in count(start=10850): if turns > 12000: break lookahead = np.add(lookahead, velocities*turns) time.sleep(0.5) if turns % 2 == 0: plt.clf() else: # - plt.scatter(lookahead[:,0], lookahead[:,1]) # + turns = 10888 plt.clf() lookahead = np.invert(np.add(positions, velocities*turns)) print(lookahead) # - devvs = {} val1 = 0 val2 = 0 summ = 0 for turns in range(10800, 11000): lookahead = np.add(positions, velocities*turns) val1 = np.std(lookahead[:,0]) val2 = np.std(list(lookahead[:,1])) summ = val1 + val2 devvs.update({turns: summ}) devvs np.transpose(positions)[0] earn.metrics import accuracy_score from sklearn.metrics import precision_score from sklearn.metrics import recall_score from sklearn.metrics import f1_score from sklearn.metrics import cohen_kappa_score import tensorflow as tf import tensorflow_hub as hub import tensorflow_text as text import tensorflow_addons as tfa from official.nlp.data import classifier_data_lib from official.nlp.bert import tokenization from official.nlp import optimization from tensorflow import keras from official.nlp import optimization # to create AdamW optmizer AUTO = tf.data.experimental.AUTOTUNE # used in tf.data.Dataset API tf.get_logger().setLevel('ERROR') import sys #Import custom script sys.path.append('/content/drive/MyDrive/Colab Notebooks/clcikbait_detection/scripts') from tf_dataset_helpers import read_tfrec_data import model_helpers as mh import visualization_helpers as vh # + colab={"base_uri": "https://localhost:8080/"} id="5RE9WE8LtUhd" outputId="e1b91bbc-45bd-4bd4-d703-301c686e65aa" print("TF Version: ", tf.__version__) print("Eager mode: ", tf.executing_eagerly()) print("Hub version: ", hub.__version__) print("GPU is", "available" if tf.config.experimental.list_physical_devices("GPU") else "NOT AVAILABLE") # + [markdown] id="HKcowprstl0r" # #Configs # + [markdown] id="HWD2vnekDCr6" # ##Bert configs # + id="GTNn3LRp7DfR" cellView="form" #@title "Model mappings" bert_model_name = 'bert_en_uncased_L-12_H-768_A-12' map_name_to_handle = { 'bert_en_uncased_L-12_H-768_A-12': 'https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/4', 'bert_en_cased_L-12_H-768_A-12': 'https://tfhub.dev/tensorflow/bert_en_cased_L-12_H-768_A-12/4' } # + colab={"base_uri": "https://localhost:8080/"} id="DSjQAbWngdd6" cellView="form" outputId="f072a85a-9c2f-4040-a752-2c9bd3439f79" #@title "Bert pretrained model downlaod" tfhub_handle_encoder = map_name_to_handle[bert_model_name] print('BERT model selected :', tfhub_handle_encoder) bert_layer = hub.KerasLayer(tfhub_handle_encoder, trainable=True, name='bert_encoder') vocab_file = bert_layer.resolved_object.vocab_file.asset_path.numpy() do_lower_case = bert_layer.resolved_object.do_lower_case.numpy() bert_tokenizer = 
tokenization.FullTokenizer(vocab_file, do_lower_case) # + [markdown] id="fe8RMZlBoEVg" # ##General config # + id="v9DLhbQrtnGd" tfrec_trainVal_files_path = '/content/drive/MyDrive/Colab Notebooks/clcikbait_detection/dataset/YouTube_clickbait/tfrec_data/train_val/' tfrec_test_files_path = '/content/drive/MyDrive/Colab Notebooks/clcikbait_detection/dataset/YouTube_clickbait/tfrec_data/test/' stop_clickbait_tfrecFiles_path = '/content/drive/MyDrive/Colab Notebooks/clcikbait_detection/dataset/Stop_clickbait/tfrec_data/' fvc_clickbait_tfrecFiles_path = '/content/drive/MyDrive/Colab Notebooks/clcikbait_detection/dataset/FVC_CORPUS/tfrec_data/' tfrec_benchmark_files_path = '/content/drive/MyDrive/Colab Notebooks/clcikbait_detection/dataset/invid_fake_video_v2/tfrec_data/' model_root_path = '/content/drive/MyDrive/Colab Notebooks/clcikbait_detection/dataset/YouTube_clickbait/saved_models' saved_model_name = f'youtube_clickbait_finetuned_domain_adapted_all_{bert_model_name}' saved_model_name_dense = f'youtube_clickbait_finetuned_domain_adapted_dense_all_{bert_model_name}' saved_model_path = os.path.join(model_root_path, saved_model_name ) saved_model_path_dense = os.path.join(model_root_path, saved_model_name_dense ) checkpoint_root_path = '/content/drive/MyDrive/Colab Notebooks/clcikbait_detection/dataset/YouTube_clickbait/checkpoints/youtube_clickbait_finetuned_domain_adapted_all' checkpoint_root_path_dense = '/content/drive/MyDrive/Colab Notebooks/clcikbait_detection/dataset/YouTube_clickbait/checkpoints/youtube_clickbait_finetuned_domain_adapted_all' if not os.path.exists(checkpoint_root_path): os.makedirs(checkpoint_root_path) if not os.path.exists(checkpoint_root_path_dense): os.makedirs(checkpoint_root_path_dense) model_checkpoint_path = os.path.join(checkpoint_root_path, f'{bert_model_name}_checkpoint' ) model_checkpoint_path_dense = os.path.join(checkpoint_root_path_dense, f'{bert_model_name}_checkpoint' ) BATCH_SIZE = 64 # Label categories label_list = [0,1] # maximum length of (token) input sequences max_seq_len = 128 init_lr = 2e-5 epochs = 15 # + [markdown] id="Pvp9_oivD_QD" # #Scripts # + id="7e62N59Ttk60" cellView="form" #@title "Utilities [TF Dataset]" def read_tfrecord(example): features = { "class": tf.io.FixedLenFeature([], tf.int64), # shape [] means scalar "text": tf.io.FixedLenFeature([], tf.string), "label": tf.io.FixedLenFeature([], tf.string) # one bytestring } # decode the TFRecord example = tf.io.parse_single_example(example, features) class_num = example['class'] text = example['text'] label = example['label'] return text, class_num, label def load_dataset(filenames): option_no_order = tf.data.Options() option_no_order.experimental_deterministic = False dataset = tf.data.TFRecordDataset(filenames, num_parallel_reads=AUTO) dataset = dataset.with_options(option_no_order) dataset = dataset.map(read_tfrecord, num_parallel_calls=AUTO) return dataset def get_batched_dataset(dataset, train=False): if train: dataset = dataset.shuffle(num_train_examples) dataset = dataset.batch(BATCH_SIZE, drop_remainder=True) dataset = dataset.cache().prefetch(AUTO) # prefetch next batch while training (autotune prefetch buffer size) return dataset # + id="8NzGu2hUKfnA" cellView="form" #@title "Utilities [Map bert preprocessing to Dataset]" def to_feature(text, label, label_list=label_list, max_seq_length=max_seq_len, tokenizer=bert_tokenizer): example = classifier_data_lib.InputExample(guid=None, text_a=text.numpy(), text_b=None, label=label.numpy()) feature = 
classifier_data_lib.convert_single_example(0, example, label_list, max_seq_length, tokenizer) return (feature.input_ids, feature.input_mask, feature.segment_ids, feature.label_id) def to_feature_map(text, label): input_ids, input_mask, segment_ids, label_id = tf.py_function(to_feature, inp=[text, label], Tout=[tf.int32,tf.int32, tf.int32, tf.int32 ]) input_ids.set_shape([max_seq_len]) segment_ids.set_shape([max_seq_len]) input_mask.set_shape([max_seq_len]) label_id.set_shape([]) x = { 'input_word_ids': input_ids, 'input_mask': input_mask, 'input_type_ids':segment_ids } return (x, label_id) # + id="ADXfSXSIWGpt" cellView="form" #@title "Utilities [Create Model Definition]" def create_model(): encoder_inputs = dict( input_word_ids=tf.keras.layers.Input(shape=(max_seq_len,), dtype=tf.int32, name="input_word_ids"), input_mask=tf.keras.layers.Input(shape=(max_seq_len,), dtype=tf.int32, name="input_mask"), input_type_ids=tf.keras.layers.Input(shape=(max_seq_len,), dtype=tf.int32, name="input_type_ids"), ) net = bert_layer(encoder_inputs)['pooled_output'] net = tf.keras.layers.Dropout(0.2)(net) #net = tf.keras.layers.Dense(384, activation='ReLU', name='dense_384')(net) #net = tf.keras.layers.Dropout(0.2)(net) output = tf.keras.layers.Dense(1, activation='sigmoid', name='classifier')(net) model = tf.keras.Model( encoder_inputs, outputs=output, name='prediction' ) return model def create_dense_model(): encoder_inputs = dict( input_word_ids=tf.keras.layers.Input(shape=(max_seq_len,), dtype=tf.int32, name="input_word_ids"), input_mask=tf.keras.layers.Input(shape=(max_seq_len,), dtype=tf.int32, name="input_mask"), input_type_ids=tf.keras.layers.Input(shape=(max_seq_len,), dtype=tf.int32, name="input_type_ids"), ) net = bert_layer(encoder_inputs)['pooled_output'] net = tf.keras.layers.Dropout(0.1)(net) net = tf.keras.layers.Dense(384, activation='ReLU', name='dense_384')(net) net = tf.keras.layers.Dropout(0.1)(net) output = tf.keras.layers.Dense(1, activation='sigmoid', name='classifier')(net) model = tf.keras.Model( encoder_inputs, outputs=output, name='prediction' ) return model # + cellView="form" id="maZRzNq8zhiW" #@title "Utilities [Model prediction]" def get_metrics(y_test, y_pred): # accuracy: (tp + tn) / (p + n) accuracy = accuracy_score(y_test, y_pred) print('Accuracy: %f' % accuracy) # precision tp / (tp + fp) precision = precision_score(y_test, y_pred) print('Precision: %f' % precision) # recall: tp / (tp + fn) recall = recall_score(y_test, y_pred) print('Recall: %f' % recall) # f1: 2 tp / (2 tp + fp + fn) f1 = f1_score(y_test, y_pred) print('F1 score: %f' % f1) # ROC AUC auc = roc_auc_score(y_test, y_pred) print('ROC AUC: %f' % auc) # confusion matrix matrix = confusion_matrix(y_test, y_pred) print(matrix) def predict_on_test_dataset(model, test_data, BATCH_SIZE=32): y_true=[] for text_feat, labels in test_data: for i in range(BATCH_SIZE): y_true.append(labels[i].numpy()) predicted_scores = model.predict(test_data) y_pred = (predicted_scores > 0.5).astype("int32") y_pred = y_pred.reshape(-1) get_metrics(y_true, y_pred) vh.plot_cm(y_true, predicted_scores) print() # + [markdown] id="HyRKSP2zwYyo" # #Read TFRecord data # + colab={"base_uri": "https://localhost:8080/"} id="sFYhIMQxwy6k" cellView="form" outputId="c8ee9533-ebca-4613-b2a3-576a3ea61a9f" #@title "Load files & Split into Train and Val" # read tfrec files from disk storage read_YouTube_data = read_tfrec_data(tfrec_trainVal_files_path, VALIDATION_SPLIT=0.2, TESTING_SPLIT=0.0, MODE=1) read_stop_clickbait_data = 
read_tfrec_data(stop_clickbait_tfrecFiles_path, VALIDATION_SPLIT=0.2, TESTING_SPLIT=0.0, MODE=1) read_fvc_clickbait_data = read_tfrec_data(fvc_clickbait_tfrecFiles_path, VALIDATION_SPLIT=0.2, TESTING_SPLIT=0.0, MODE=1) # splitting data files between training, validation and test YouTube_filenames, YouTube_training_filenames, YouTube_validation_filenames, YouTube_testing_filenames = read_YouTube_data.get_tfrec_files() stop_clickbait_filenames, stop_clickbait_training_filenames,stop_clickbait_validation_filenames, stop_clickbait_testing_filenames = read_stop_clickbait_data.get_tfrec_files() fvc_filenames, fvc_training_filenames, fvc_validation_filenames, fvc_testing_filenames = read_fvc_clickbait_data.get_tfrec_files() filenames = YouTube_filenames + stop_clickbait_filenames + fvc_filenames training_filenames = YouTube_training_filenames + stop_clickbait_training_filenames + fvc_training_filenames validation_filenames = YouTube_validation_filenames + stop_clickbait_validation_filenames + fvc_validation_filenames random.shuffle(filenames) random.shuffle(training_filenames) random.shuffle(validation_filenames) print(f'Length FileNames : {len(filenames)} Length Training FileNames : {len(training_filenames)} Length Validation FileNames : {len(validation_filenames)}') num_train_examples = 974 * len(YouTube_training_filenames) + 500 * len(stop_clickbait_training_filenames) + 100 * len(fvc_training_filenames) num_total_examples = 974 * len(YouTube_filenames) + 500 * len(stop_clickbait_filenames) + 100 * len(fvc_filenames) validation_steps = int(num_total_examples // len(filenames) * len(validation_filenames)) // BATCH_SIZE steps_per_epoch = int(num_total_examples // len(filenames) * len(training_filenames)) // BATCH_SIZE num_train_steps = steps_per_epoch * epochs num_warmup_steps = num_train_steps // 10 print("With a batch size of {}, there will be {} batches per training epoch and {} batch(es) per validation run.".format(BATCH_SIZE, steps_per_epoch, validation_steps)) # + colab={"base_uri": "https://localhost:8080/"} id="EnFB1lF5p65f" cellView="form" outputId="bb6e9822-7281-4f43-ca0e-49833ce4278c" #@title "Load Test and Benchmarking files" read_test_data = read_tfrec_data(tfrec_test_files_path, VALIDATION_SPLIT=0.0, TESTING_SPLIT=0.0, MODE=1) # Used only for testing read_benchmark_data = read_tfrec_data(tfrec_benchmark_files_path, VALIDATION_SPLIT=0.0, TESTING_SPLIT=0.0, MODE=1) # Used only for benchmarking testing_filenames, _, _,_ = read_test_data.get_tfrec_files() benchmarking_filenames, _, _,_ = read_benchmark_data.get_tfrec_files() len(testing_filenames), len(benchmarking_filenames) # + [markdown] id="dexXAe2b0LQl" # #Load TFRecord into TF Dataset # + id="5a2YkXiW0Osa" # create the TF datasets with tf.device('/cpu:0'): train_ds = load_dataset(training_filenames) val_ds = load_dataset(validation_filenames) test_ds = load_dataset(testing_filenames) benchmark_ds = load_dataset(benchmarking_filenames) # + colab={"base_uri": "https://localhost:8080/"} id="81QxcdL40xMi" outputId="ef247bde-51c0-49c6-f252-c16c2eb9dcfc" for i,(text, class_num, label) in enumerate(train_ds.take(10)): print(f"text : {text.numpy()}, class : {class_num.numpy()}, label : {label.numpy()}") # + id="Y9EYaEqFQKM5" # We need only Text and numeric Label from the dataset with tf.device('/cpu:0'): train_ds = train_ds.map(lambda text, class_num, label:(text, class_num)) val_ds = val_ds.map(lambda text, class_num, label:(text, class_num)) test_ds = test_ds.map(lambda text, class_num, label:(text, class_num)) benchmark_ds = 
benchmark_ds.map(lambda text, class_num, label:(text, class_num)) # + colab={"base_uri": "https://localhost:8080/"} id="fxubOVzUibud" outputId="39d108da-f2bb-4723-8deb-cd09d478d41d" train_ds.element_spec, val_ds.element_spec, test_ds.element_spec, benchmark_ds.element_spec # + [markdown] id="m4mlvS-W4TDx" # #Modeling # Train model using train set and use validation set for train feedbacks. The goal is that model would be able to learn important aspects about how to catch the clickbaits correctly. # + [markdown] id="Y-C-yuTZILRE" # ##Bert preprocessing # + id="cVTejAUcINXW" with tf.device('/cpu:0'): # train train_data = train_ds.map(to_feature_map, num_parallel_calls=tf.data.experimental.AUTOTUNE) train_data = get_batched_dataset(train_data, train=True) # valid val_data = val_ds.map(to_feature_map, num_parallel_calls=tf.data.experimental.AUTOTUNE) val_data = get_batched_dataset(val_data) # test test_data = test_ds.map(to_feature_map, num_parallel_calls=tf.data.experimental.AUTOTUNE) test_data = get_batched_dataset(test_data) # benchmark benchmark_data = benchmark_ds.map(to_feature_map, num_parallel_calls=tf.data.experimental.AUTOTUNE) benchmark_data = get_batched_dataset(benchmark_data) # + colab={"base_uri": "https://localhost:8080/"} id="waCLqUaWSbQY" outputId="caa95172-3ba7-44b5-d529-9b5962f2a381" # train data spec train_data.element_spec # + [markdown] id="CsCwE3RZSi3e" # #Build classifier # + colab={"base_uri": "https://localhost:8080/"} id="sHVKiVfzSkSl" outputId="70fe70d7-1cfd-42b4-ba6a-f481fef05f4a" classifier_model = create_model() classifier_model.summary() # + [markdown] id="t-l3hSAZjmAR" # #Train classifier model # + [markdown] id="JB9jLSmmxlRX" # ##Compile model # + colab={"base_uri": "https://localhost:8080/", "height": 191} id="jT4HWvMbweSY" outputId="d9a8dee6-dff1-4ae0-8f5d-fccb5566acf5" es = tf.keras.callbacks.EarlyStopping( monitor='val_loss', verbose=1, patience=5, mode='min', restore_best_weights=True ) mcb = tf.keras.callbacks.ModelCheckpoint ( filepath=model_checkpoint_path, save_weights_only=True, monitor='val_loss', mode='min', verbose=1, save_best_only=True ) METRICS = [ keras.metrics.TruePositives(name='tp'), keras.metrics.FalsePositives(name='fp'), keras.metrics.TrueNegatives(name='tn'), keras.metrics.FalseNegatives(name='fn'), keras.metrics.BinaryAccuracy(name='accuracy'), keras.metrics.Precision(name='precision'), keras.metrics.Recall(name='recall'), keras.metrics.AUC(name='auc'), keras.metrics.AUC(name='prc', curve='PR'), # precision-recall curve ] optimizer = optimization.create_optimizer( init_lr=init_lr, num_train_steps=num_train_steps, num_warmup_steps=num_warmup_steps, optimizer_type='adamw' ) classifier_model.compile( optimizer=optimizer, loss=tf.keras.losses.BinaryCrossentropy(), metrics=METRICS ) tf.keras.utils.plot_model(model=classifier_model, show_shapes=True, dpi=60) # + [markdown] id="H6XKaUuaxqlk" # ##Train # + colab={"base_uri": "https://localhost:8080/"} cellView="form" id="j2x4r2P2oXNF" outputId="bc2e47ae-c7ca-4030-e080-3bf31c70ce2c" #@title Load Model weights if available if os.path.exists(checkpoint_root_path) & len(os.listdir(checkpoint_root_path)): print('loading saved weight') classifier_model.load_weights(model_checkpoint_path) else: print('No weight to initialize') # + colab={"base_uri": "https://localhost:8080/"} id="VN83npGrjnwd" outputId="409bcf0b-ca05-43b4-f60b-cd78c12e34af" history = classifier_model.fit( x=train_data, validation_data=val_data, epochs=epochs, callbacks=[es, mcb] ) # + id="Qq21bsBY5zci" # Save model weights 
classifier_model.save_weights(model_checkpoint_path) # + [markdown] id="KGm28Sny7Wbg" # #Plot train history # + colab={"base_uri": "https://localhost:8080/", "height": 621} id="de9jdjsl83rU" outputId="67447238-2945-45a7-b638-63c45a458760" vh.display_training_curves(history.history['accuracy'], history.history['val_accuracy'], 'accuracy', 211) vh.display_training_curves(history.history['loss'], history.history['val_loss'], 'loss', 212) # + colab={"base_uri": "https://localhost:8080/", "height": 621} id="bMtvCGsZ89PT" outputId="00def595-d37d-4ea3-e554-012ed93ec455" vh.display_training_curves(history.history['recall'], history.history['val_recall'], 'recall', 211) vh.display_training_curves(history.history['precision'], history.history['val_precision'], 'precision', 212) # + [markdown] id="RFpsUaYEl9V5" # # Model Inference # We have trained the classifier, now let's use the trained model to infer the test dataset. Beside the test dataset, we also have benchmark dataset that we would also to see the eefectiveness of our model over benchmark dataset on predicting the clickbait's. # + [markdown] id="1Y9WvTXWlkJX" # ##Model Evaluation # + [markdown] id="il2faBB5uhbz" # ###Evaluate Classifier on Test set # + colab={"base_uri": "https://localhost:8080/"} id="82Q5YKoQulNX" outputId="e0d2e339-ed7f-4800-fd5c-2d0012c9404e" results_test_set = classifier_model.evaluate(test_data) for name, value in zip(classifier_model.metrics_names, results_test_set): print(name, ': ', value) print() # + [markdown] id="EhQMGxtbkqMf" # ### Evaluate Classifier on Benchmarking set # + colab={"base_uri": "https://localhost:8080/"} id="4PF6NHJ5kxpN" outputId="b4652877-4ec3-45e0-9234-26d726da3ed9" results_benchmarking_set = classifier_model.evaluate(benchmark_data) for name, value in zip(classifier_model.metrics_names, results_benchmarking_set): print(name, ': ', value) print() # + [markdown] id="0hyxBLmuwTsg" # ##Export Model for inference # + id="EJ8OWWr6tPvX" colab={"base_uri": "https://localhost:8080/"} outputId="dac1baa6-aa2b-4112-f380-172b201b3670" classifier_model.save(saved_model_path, include_optimizer=False) # + [markdown] id="4dJBXqiE9rlP" # ##Model prediction # + id="5ObHbCj0NVgo" saved_classifier = keras.models.load_model(saved_model_path) # + [markdown] id="vL_bcJVrlr1p" # ### Prediction for Test set # + colab={"base_uri": "https://localhost:8080/", "height": 588} id="QOxoz5tb9ty3" outputId="eb996849-5655-4c74-d136-d8763682fbc3" predict_on_test_dataset(saved_classifier, test_data, BATCH_SIZE=BATCH_SIZE) # + [markdown] id="S-8I5Jnnlyg5" # ### Prediction for Benchmarking set # + colab={"base_uri": "https://localhost:8080/", "height": 588} id="U4phFOUYlXT6" outputId="073c011d-bf7c-493d-f4c2-f70c897cbf1b" predict_on_test_dataset(saved_classifier, benchmark_data, BATCH_SIZE=BATCH_SIZE)
21,564
/Data_Cleaning/Lab_Data_analysis_cleaning_etc_Sol.ipynb
d7ddbaebaecd6655ba5981b9b52a6296014651dc
[]
no_license
dralmadani/DS_GA
https://github.com/dralmadani/DS_GA
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
539,437
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Emergency calls dataset # # Let's analyze another famous and real dataset from Kaggle. The data is on [Emergency - 911 Calls from Montgomery County, PA](https://www.kaggle.com/mchirico/montcoalert). <br> # # Dataset contains the following columns: # * `lat` : Latitude # * `lng`: Longitude # * `desc`: Description of the Emergency Call # * `zip`: Zipcode # * `title`: Title # * `timeStamp`: YYYY-MM-DD HH:MM:SS # * `twp`: Township # * `addr`: Address # * `e`: always 1 # # Let's explore and learn by doing! # + # Some imports for the project import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns sns.set_style('whitegrid') # %matplotlib inline #Setting display format to retina in matplotlib to see better quality images. from IPython.display import set_matplotlib_formats set_matplotlib_formats('retina') # - # **The name of the downloaded data file from kaggle is `911.csv`, please import in pandas dataframe '`df`'.** df = pd.read_csv('Emergency_Calls.csv') # **Can you tell the number of columns and entries in the dataset?** # + # Code here please # - df.info() # **How the data look like, display first 2 rows of the dataset.** # + # Code here please # - df.head(2) # **How many unique townships and title are there in the data?** # + # Code here please # - df['twp'].nunique(), df['title'].nunique() # **Which two township calls the most and what is the call volume?** # + # Code here please # - df['twp'].value_counts().head(2) # **What are the top 5 townships (twp) for 911 calls?** # + # Code here please # - df['twp'].value_counts().head(5) # **How many unique title codes are in the dataset and what are the top ten title codes?** # <br>Use `.format()` with `print` please. # + # Code here please # - print('There are {} nnique title codes in the dataset'.format(df['title'].nunique())) print(df['title'].value_counts().head(10)) # ***If you notice in the title "Traffic: VEHICLE ACCIDENT", we can take "Traffic" as a Reason/or/Department and "VEHICLE ACCIDENT" as code. Either we can create one column with Reason or tow separate with reason and code. Such feature creating is very helpful in the data analysis, let's try this!***<br> # # **Create two new columns "Reason" and "Code" in the data accordingly.**<br> # *Recall `.apply()` with lambda expression and split the string at :* # + # Code here please # - df['Reason'] = df['title'].apply(lambda title: title.split(':')[0]) df['Code'] = df['title'].apply(lambda title: title.split(':')[1]) # **See how the data looks like now (first two rows only), do you see the new columns in your data?** # + # Code here please # - df.head(2) # **How many unique reasons are for the call?** # + # Code here please # - df['Reason'].unique() # **What is the most common Reason to call? create a `countplot` please** # + # Code here please # - sns.countplot(x='Reason',data=df, palette='Set1') # **We have already created two new columns from 'title', if we look at the 'timeStamp' column, it has year, month, day, hour etc which can be useful in many ways. The datatype is string for this column. Let's recheck the type for any location e.g. 
at `iloc[0]`** # + # Code here please # - df['timeStamp'].iloc[0], type(df['timeStamp'].iloc[0]) # **Convert `'timeStamp'` from string to `DateTime` objects.**<br> # *Hint: [`pd.to_datetime`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html) to convert the column from strings to DateTime objects.* # + # Code here please # - df['timeStamp'] = pd.to_datetime(df['timeStamp']) # **Check the type of any data entry in `'timeStamp'` column and extract month from the `DateTime` object.** # + # Code here please # - time = df['timeStamp'].iloc[50] # grabbing any datapoint in time print("type: ", type(time)) print("Time: ", time) print("Month: ", time.month) print("Year: ", time.year) # **So, we have DataTime objects in the 'timeStamp' columns, great!<br> # We can grab specific attributes from a Datetime object, as we did above. For example:** # # time = df['timeStamp'].iloc[0] # time.hour, time.month etc # # **This is a good idea to do some feature extraction here. Let's create four new columns `'year', 'hour', 'month', 'data' and 'day_of_week'` to your dataframe. Display first two rows to confirm if you have the new columns in the dataset?**<br> # Recall `.apply()` with `lambda` expression! # + # Code here please # - df['year'] = df['timeStamp'].apply(lambda time: time.year) df['hour'] = df['timeStamp'].apply(lambda time: time.hour) df['month'] = df['timeStamp'].apply(lambda time: time.month) df['date'] = df['timeStamp'].apply(lambda time: time.date()) df['week_day'] = df['timeStamp'].apply(lambda time: time.dayofweek) df.head(2) # **Excellent, looks like we have the new columns in the date!<br> # The columns `'week_day'` and `'month'` are an integers from 0 to 6 and 1 to 12. We can pass a dictionary (with integers as key and day/month name as value) to `map()` and create columns 'day_name' and 'month_name' with their actual names.<br>** # # **Use the dictionaries below and create new columns 'day_name' and 'month_name' for the actual day and month using `map()` method.** # # day_map = {0:'Mon',1:'Tue',2:'Wed',3:'Thu',4:'Fri',5:'Sat',6:'Sun'} # month_map = {1:'Jan', 2:'Feb', 3:'Mar', 4:'Apr', 5:'May', 6:'Jun', # 7:'Jul', 8:'Aug', 9:'Sep', 10:'Oct', 11:'Nov', 12:'Dec'} day_map = {0:'Mon',1:'Tue',2:'Wed',3:'Thu',4:'Fri',5:'Sat',6:'Sun'} month_map = {1:'Jan', 2:'Feb', 3:'Mar', 4:'Apr', 5:'May', 6:'Jun', 7:'Jul', 8:'Aug', 9:'Sep', 10:'Oct', 11:'Nov', 12:'Dec'} # + # Code here please # - df['day_name'] = df['week_day'].map(day_map) df['month_name'] = df['month'].map(month_map) df.head(2) # **Which month is the busiest month?** # + # Code here please # - df['month_name'].value_counts().plot(kind='bar') #sns.countplot(x='month',data=df) # few months are missing, sep, oct, nov.. will discuss soon # **Create a countplot for the `week_day` column with the `hue` based on the `Reason` column. Relocate the legend outside of the plot please.**<br> **What is the least common all-time reason for call?** # + # Code here please # + sns.countplot(x='day_name',data=df, hue='Reason',palette='coolwarm') # To relocate the legend plt.legend(bbox_to_anchor=(1,1), loc=2) # - # **How about creating count plot for `month` with `reason` as `hue`?** # + # Code here please # + sns.countplot(x='month_name',data=df,hue='Reason',palette='Set1') # To relocate the legend plt.legend(bbox_to_anchor=(1, 1), loc=2) # - # **Think:** What did you learn from the above two `coutplots`? # **Sep, Oct and Nov months are missing! 
Let see if there are any null values in month column!** # + # Code here please # - df['month_name'].isnull().sum() # So, actually there is no data entries from these months! # + # Data is missing some months! 9,10, and 11 are not there. # - # **We can fill in the missing information using other ways. Line plot is a simplest one to fix this!**<br> # **Task:**<br> # **Group the data by month and aggregate using `count()`. Use `head()` to display the data.** # + # Code here please # - byMonth = df.groupby('month').count() byMonth.head(12) # **Let's create a simple plot using grouped data against month. We can use any column to plot along y** # + # Code here please # - # Could be any column byMonth['zip'].plot() # **Let's use seaborn's `lmplot()` to create a linear fit on the number of calls per month.** *We may need to reset the index to a column so that we can use it in the lmplot* # + # Code here please # - # Let's reset_index first and get the head of the dataframe data = byMonth.reset_index() data.head() # Now we have moth as a columns, let's get the lmplot! # + # Code here please # - sns.lmplot(x='month',y='twp',data=data) # sns.lmplot(x='month',y='twp',data=byMonth.reset_index()) # in one line code # **We have a column `'date'` in our data.<br> # The emergency department needs to know the flux of calls per day for each reason mentioned in the dataset so that they can take necessary steps to improve the service. <br> # Let's see the variations in the calls flux per day for three reasons (Traffic, Fire, EMS) in the dataset. <br>** # *Hint: We need to group the data for each reason on date column and call the `count()`.* # + # Code here please # - # Group your data first based on three conditions Traffic, Fire, EMS n_calls_byDate_Traffic = df[df['Reason']=='Traffic'].groupby('date').count()#['twp'].plot() n_calls_byData_Fire = df[df['Reason']=='Fire'].groupby('date').count() n_calls_byDate_EMS = df[df['Reason']=='EMS'].groupby('date').count() # + # Code here please # + #create plots for the grouped data fig, (ax1,ax2,ax3) = plt.subplots(nrows=3, figsize=(8, 6), sharex=True,sharey=True) n_calls_byDate_Traffic['zip'].plot(ax=ax1) ax1.set_title('Traffic') n_calls_byData_Fire['zip'].plot(ax=ax2) ax2.set_title('Fire') n_calls_byDate_EMS['zip'].plot(ax=ax3) ax3.set_title('EMS') plt.tight_layout() # - # **The emergency department wants to know the time of the day/night when they expect most calls. Heatmaps and clustermap are great way for this.**<br> # **Let's reshape our data in a way that we get days along index and hours along columns. We can use groupby along with [`unstack()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.unstack.html) for this purpose. We can call aggregation method `count()` and take any column e.g. `'zip'`, `'Reason'` etc in this process.**<br> # Now, our data will contain 7 rows (one for each day) and 24 columns (one for each hour) # + # Code here please # - by_dayhr=df.groupby(by=['day_name','hour']).count()['Reason'].unstack() by_dayhr.head() # **Let's create a `heatmap` using the dataframe that we have just created to see what time does the emergency crew gets maximum number of calls?**<br> # + # Code here please # - plt.figure(figsize=(12,6)) sns.heatmap(by_dayhr,cmap='coolwarm') # ***Note the change with day and hours, do think the data makes sense? Can you make some conclusions?*** # * Working day after work are the busy hours, people left for home after long day of work and more calls are expected. 
# * Night hours are mostly quite hours # * Saturday and Sunday has lower flux and the less number of calls are made in early morning. # **Please create a clustermap using your dataframe** # + # Code here please # - sns.clustermap(by_dayhr,cmap='coolwarm') # **Create an other dataframe for months as columns and recreate heatmap and cluster map.** # + # Code here please # - by_dayMonth = df.groupby(by=['day_name','month']).count()['Reason'].unstack() by_dayMonth.head() # + # Code here please # - plt.figure(figsize=(12,6)) sns.heatmap(by_dayMonth)#,cmap='coolwarm') # + # Code here please # - sns.clustermap(by_dayMonth)#,cmap='coolwarm') # Explore [kernels on kaggle](https://www.kaggle.com/mchirico/montcoalert/kernels) to get more ideas on data analysis and visualizations! People have done great work, you can learn from community! # ## Good Luck!
11,408
/05_bayssien network/05_bayesian_network.ipynb
3cd79583ccdea05240db85584cdc5c0798dd0a8c
[]
no_license
rudramuni77/ML
https://github.com/rudramuni77/ML
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
7,174
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import argparse import urllib import json import os import httplib2 import oauth2 { "consumer_key": "AhnQ75Njvid1E0F5OfDka0Mdy", "consumer_secret": "Gza45JFyy6bAEMkZWGx4Edk3yT70lE6cl0nwHKun823lk71cbf", "access_token": "221085452-JpzdsvDoqh5xswyAb2HPHt4oPQxX5S8IP4OHbaGo", "access_token_secret": "fiLvFHqV3HK0aYiSyD75IpxeUNhYqW7ccdsDzLvIRzE6D" } class TwitterData: def parse_config(self): config = {} # from file args if os.path.exists('config.json'): with open('config.json') as f: config.update(json.load(f)) else: # may be from command line parser = argparse.ArgumentParser() parser.add_argument('-ck', '--consumer_key', default=None, help='Your developper `Consumer Key`') parser.add_argument('-cs', '--consumer_secret', default=None, help='Your developper `Consumer Secret`') parser.add_argument('-at', '--access_token', default=None, help='A client `Access Token`') parser.add_argument('-ats', '--access_token_secret', default=None, help='A client `Access Token Secret`') args_ = parser.parse_args() def val(key): return config.get(key)\ or getattr(args_, key)\ or raw_input('Your developper `%s`: ' % key) config.update({ 'consumer_key': val('consumer_key'), 'consumer_secret': val('consumer_secret'), 'access_token': val('access_token'), 'access_token_secret': val('access_token_secret'), }) # should have something now return config #end def oauth_req(self, url, http_method="GET", post_body=None, http_headers=None): config = self.parse_config() consumer = oauth.Consumer(key=config.get('consumer_key'), secret=config.get('consumer_secret')) token = oauth.Token(key=config.get('access_token'), secret=config.get('access_token_secret')) client = oauth.Client(consumer, token) resp, content = client.request( url, method=http_method, body=post_body or '', headers=http_headers ) return content #end #start getTwitterData def getData(self, keyword, params = {}): maxTweets = 50 url = 'https://api.twitter.com/1.1/search/tweets.json?' data = {'q': keyword, 'lang': 'en', 'result_type': 'recent', 'count': maxTweets, 'include_entities': 0} #Add if additional params are passed if params: for key, value in params.iteritems(): data[key] = value url += urllib.parse.urlencode(data) response = self.oauth_req(url) jsonData = json.loads(response) tweets = [] if 'errors' in jsonData: print("API Error") print(jsonData['errors']) else: for item in jsonData['statuses']: tweets.append(item['text']) return tweets #end #end class ## Usage ## ===== td = TwitterData() print(td.getData('#IChoosePeaceUG')) # -
3,448
/linear-reg-bikeshare.ipynb
b94bcab2cec50903850a6adbedae50cfa32d6538
[]
no_license
amyforza/For-Transfer
https://github.com/amyforza/For-Transfer
1
0
null
null
null
null
Jupyter Notebook
false
false
.py
528,371
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np 300_400= np.random.randint(5000, 10000,size=1000) import pandas as pd market_capital = pd.DataFrame(dataset, columns=['market_capital']) market_capital.reset_index(inplace=True) print(market_capital.head()) market_capital.plot(kind='scatter', x='index', y='market_capital', alpha=0.5) from pandas.plotting import scatter_matrix scatter_matrix(market_capital[['market_capital']], figsize=(12,8)) cialisation without mentor’s consent. My contribution won’t demand any claim in future for further progress of Mentor’s development and innovation along the direction unless there is a special continuous involvement.* # # # + [markdown] id="BU83bgdAVj_Y" # We use the decision tree algorithm implemented in the first part of the project as a blackbox and implement AdaBoost.RT: a Boosting Algorithm for Regression Problems. # + [markdown] id="YjYYuTPcZGZQ" # ##The Algorithm # # # 1. Input # # * Sequence of $n$ examples $(x_1, y_1), (x_2, y_2),..., (x_n, y_n)$ where $y \in R$. # * Weak learning algorithm _Weak Learner_ (Decision tree algorithm in our case) # * Integer **T** specifying number of iterations # * Threshold $\phi$ for demacrating correct and incorrect predictions for AdaBoost.RT # # 2. Initialize # # * Machine number or iteration $t = 1$ # * Distribution $D_t(i) = 1 / m$ for all $i$ # # 3. Iterate While $t \leq T$ # # * Call _Weak Learner_, providing it with distribution $D_t$ # * Build the regressor model: $f_t(x) \xrightarrow{} y$ # * Calculate the error rate of $f_t(x)$: # $\epsilon_t = \sum_{i: \left| \frac{f_t(x_i) - y_i}{y_i}\right| > \phi} D_t(i)$ # # * Set $\beta_t = \epsilon_t / (1 - \epsilon_t)$ # * Update distribution $D_t$ as # $D_{t+1}(i) = \frac{D_t(i)}{Z_t} \times \begin{cases} \beta_t & \text{if} \enspace \left| \frac{f_t(x_i) - y_i}{y_i}\right| \leq \phi \\ 1 & \text{otherwise} \end{cases}$ where $Z_t$ is a normalization factor chosen such that $D_{t+1}$ will be a distribution. # * Set $t = t + 1$ # # 4. Output the final hypothesis: # # * $f_{fin}(x) = \frac{\sum_t log(1 / \beta_t) f_t(x)}{ \sum_t log(1 / \beta_t)}$ # # # # # # # + [markdown] id="uaF39Fq8mcwr" # ##Main Idea # * $\epsilon_t$ is computed using the notion of a pre-set threshold $\phi$, which is used to demarcate prediction error as correct or incorrect. If the absolute relative error (ARE) for any particular example is greater than $\phi$, the predicted value for this example is considered to be incorrect, otherwise it is correct. The numbers of correct and incorrect predictions are counted to calculate $\epsilon_t$. # * To compute the distribution for the next machine, we # multiply the weight of each example by $\beta_t$ if the previous machine classifies or predicts this example correctly (this reduces lhe weight of the example), and otherwise the weight remains unchanged. Thus it seems that the regression problem in AdaBoost.RT is projected into the binary classification problems while updating the weights of the examples. 
# + [markdown] id="W8QL6wObsPb9" # ###References # Link: https://ieeexplore.ieee.org/document/1380102 # + [markdown] id="IwRcPVOVpZd1" # ##The Decision Tree Algorithm # + id="GiCATaKeg04R" import numpy as np import sklearn.metrics as metrics class TreeNode: def __init__(self): self.predicted_value = None self.decision_feature = None self.decision_value = None self.left_node = None self.right_node = None class DecisionTree: def __init__(self): self.root_node = None self.min_coef_of_var = None self.reductions = None def fit(self, X, Y, min_coef_of_var = 0, pruning_factor = 0): self.min_coef_of_var = min_coef_of_var self.root_node = self.build(X, Y) self.prune_decision_tree(X, Y, self.root_node, pruning_factor) self.reductions = np.zeros(len(X[0])) self.reductions = self.calculate_reductions(X, Y, self.root_node, self.reductions) def get_decision_value_for_feature(self, X, Y, decision_feature): Z = np.empty((0, 2), float) for i in range(len(Y)): Z = np.append(Z, [[X[i, decision_feature], Y[i]]], axis = 0) Z = np.sort(Z, axis = 0) left_cardinality = 0 right_cardinality = 0 left_sum = 0 right_sum = 0 left_sum_of_squares = 0 right_sum_of_squares = 0 for i in range(len(Z)): right_cardinality += 1 right_sum += Z[i, 1] right_sum_of_squares += Z[i, 1] ** 2 current_best_impurity = right_sum_of_squares current_best_decision_value = Z[0, 0] - 1 for i in range(len(Z)): left_cardinality += 1 left_sum += Z[i, 1] left_sum_of_squares += Z[i, 1] ** 2 right_cardinality -= 1 right_sum -= Z[i, 1] right_sum_of_squares -= Z[i, 1] ** 2 impurity = left_sum_of_squares - left_sum * left_sum / left_cardinality if (right_cardinality != 0): impurity += right_sum_of_squares - right_sum * right_sum / right_cardinality if (impurity < current_best_impurity): current_best_impurity = impurity current_best_decision_value = Z[i, 0] return current_best_decision_value def get_impurity(self, X, Y, decision_feature, decision_value_for_feature): left_cardinality = 0 right_cardinality = 0 left_sum = 0 right_sum = 0 for i in range(len(Y)): if (X[i, decision_feature] <= decision_value_for_feature): left_cardinality += 1 left_sum += Y[i] else: right_cardinality += 1 right_sum += Y[i] if (left_cardinality != 0): left_mean = left_sum / left_cardinality else: left_mean = 0 if (right_cardinality != 0): right_mean = right_sum / right_cardinality else: right_mean = 0 impurity = 0 for i in range(len(Y)): if (X[i, decision_feature] <= decision_value_for_feature): impurity += (Y[i] - left_mean) ** 2 else: impurity += (Y[i] - right_mean) ** 2 return impurity def divide_data(self, X, Y, decision_feature, decision_value): number_of_features = len(X[0]) X_left = np.empty((0, number_of_features), float) X_right = np.empty((0, number_of_features), float) Y_left = np.empty(0, float) Y_right = np.empty(0, float) for i in range(len(X)): if (X[i, decision_feature] <= decision_value): X_left = np.append(X_left, [X[i]], axis = 0) Y_left = np.append(Y_left, [Y[i]], axis = 0) else: X_right = np.append(X_right, [X[i]], axis = 0) Y_right = np.append(Y_right, [Y[i]], axis = 0) return X_left, Y_left, X_right, Y_right def build(self, X, Y): if (len(X) == 0): return root_node = TreeNode() node_mean = np.mean(Y) root_node.predicted_value = node_mean node_deviation = np.std(Y) # Do not split if (node_deviation / node_mean < self.min_coef_of_var): root_node.decision_value = 0 root_node.decision_feature = 0 return if (np.amin(Y) == np.amax(Y)): root_node.decision_value = 0 root_node.decision_feature = 0 return root_node number_of_features = len(X[0]) 
current_best_feature = -1 current_best_decision_value = -1 current_best_impurity = -1 for i in range(number_of_features): decision_value_for_feature = self.get_decision_value_for_feature(X, Y, i) impurity_of_decision_value = self.get_impurity(X, Y, i, decision_value_for_feature) if (current_best_feature == -1 or impurity_of_decision_value < current_best_impurity): current_best_feature = i current_best_decision_value = decision_value_for_feature current_best_impurity = impurity_of_decision_value root_node.decision_feature = current_best_feature root_node.decision_value = current_best_decision_value X_left, Y_left, X_right, Y_right = self.divide_data(X, Y, root_node.decision_feature, root_node.decision_value) if (len(Y_left) == 0 or len(Y_right) == 0): return root_node root_node.left_node = self.build(X_left, Y_left) root_node.right_node = self.build(X_right, Y_right) return root_node def prune_decision_tree(self, X, Y, root_node, pruning_factor): if (root_node == None): return 0 X_left, Y_left, X_right, Y_right = self.divide_data(X, Y, root_node.decision_feature, root_node.decision_value) left_tree_size = self.prune_decision_tree(X_left, Y_left, root_node.left_node, pruning_factor) right_tree_size = self.prune_decision_tree(X_right, Y_right, root_node.right_node, pruning_factor) tree_size = 1 + left_tree_size + right_tree_size impurity = 0 for y in Y: impurity += (y - root_node.predicted_value) ** 2 divided_impurity = self.get_impurity(X, Y, root_node.decision_feature, root_node.decision_value) pruning_value = impurity / len(Y) - divided_impurity / len(Y) - pruning_factor * tree_size if (pruning_value < 0): root_node.left_node = None root_node.right_node = None return 1 else: return tree_size def calculate_reductions(self, X, Y, root_node, reductions): if (root_node == None): return reductions impurity = 0 for y in Y: impurity += (y - root_node.predicted_value) ** 2 for i in range(len(X[0])): decision_value_for_feature = self.get_decision_value_for_feature(X, Y, i) impurity_of_decision_value = self.get_impurity(X, Y, i, decision_value_for_feature) reductions[i] += impurity - impurity_of_decision_value X_left, Y_left, X_right, Y_right = self.divide_data(X, Y, root_node.decision_feature, root_node.decision_value) reductions = self.calculate_reductions(X_left, Y_left, root_node.left_node, reductions) reductions = self.calculate_reductions(X_right, Y_right, root_node.right_node, reductions) return reductions def get_reductions(self): return self.reductions def predict_single(self, X): current_node = self.root_node while (True): if (X[current_node.decision_feature] <= current_node.decision_value): if (current_node.left_node != None): current_node = current_node.left_node else: return current_node.predicted_value else: if (current_node.right_node != None): current_node = current_node.right_node else: return current_node.predicted_value def predict(self, X): Y = np.empty(0, float) for x in X: Y = np.append(Y, [self.predict_single(x)], axis = 0) return Y # + [markdown] id="CtYHmbuPpj5i" # ## AdaBoostRT # + id="UFU0FbhFNknu" class AdaBoostRT: def __init__(self): self.trees = [] self.beta = [] self.number_of_iterations = None def fit(self, X, y, threshold_phi = 0.3, number_of_iterations = 20, min_coef_of_var = 0.025, pruning_factor = 0.1): self.number_of_iterations = number_of_iterations N = len(X) D = [1 / N] * N for t in range(number_of_iterations): ids = self.find_ids_from_distribution(D) X_new, y_new = self.find_new_X_y(X, y, ids) new_tree = DecisionTree() new_tree.fit(X_new, y_new, 
min_coef_of_var, pruning_factor) y_pred = new_tree.predict(X) self.trees.append(new_tree) eps_t = self.cal_error_rate(D, y, y_pred, threshold_phi) self.beta.append(eps_t / (1 - eps_t)) D = self.improved_distribution(D, y, y_pred, threshold_phi, self.beta[t]) def predict_single(self, X): weighted_y = 0 total_weight = 0 for i in range(self.number_of_iterations): weighted_y += np.log(1 / self.beta[i]) * self.trees[i].predict_single(X) total_weight += np.log(1 / self.beta[i]) return weighted_y / total_weight def predict(self, X): y = np.empty(0, float) for x in X: y = np.append(y, [self.predict_single(x)], axis = 0) return y def improved_distribution(self, D, y, y_pred, threshold_phi, beta): for i in range(len(D)): if (abs((y_pred[i] - y[i]) / y[i]) <= threshold_phi): D[i] *= beta Z = sum(D) for i in range(len(D)): D[i] /= Z return D def cal_error_rate(self, D, y, y_pred, threshold_phi): eps_t = 0 for i in range(len(y)): if (abs((y_pred[i] - y[i]) / y[i]) > threshold_phi): eps_t += D[i] return eps_t def find_new_X_y(self, X, y, ids): # samples new X, y number_of_features = len(X[0]) X_new = np.empty((0, number_of_features), float) y_new = np.empty(0, float) for i in ids: X_new = np.append(X_new, [X[i]], axis = 0) y_new = np.append(y_new, [y[i]], axis = 0) return X_new, y_new def find_ids_from_distribution(self, D): # genarates N discrete random variable from the given distribution N = len(D) cdf = [D[0]] ids = [] for i in range(N - 1): cdf.append(cdf[i] + D[i+1]) r = np.random.uniform(0, 1, N) for i in range(N): ids.append(self.find_id_from_cdf(r[i], cdf)) return ids def find_id_from_cdf(self, r, cdf): # genarates one discrete random variable from the given CDF using binary search N = len(cdf) low = 0 high = N - 1 while (low != high): mid = (low + high + 1) // 2 if (cdf[mid] < r): low = mid else: high = mid - 1 return low # + [markdown] id="3MAKXsDFp1Uz" # We start by testing our algorithm on boston housing dataset from sklearn library. # + id="F6jNPa1xbC3r" colab={"base_uri": "https://localhost:8080/", "height": 0} outputId="ac7229a0-31ab-4a53-a161-ce3baeb5de8f" from sklearn.datasets import load_boston data = load_boston(return_X_y=False) X = data.data y = data.target from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 1) print(X_train.shape) # + colab={"base_uri": "https://localhost:8080/", "height": 0} id="fD98v_y6KxsE" outputId="5d58d4bd-35a0-45a8-9077-eb9c8fb096f8" import time start_time = time.time() decision_tree = DecisionTree() decision_tree.fit(X_train, y_train, min_coef_of_var = 0.025, pruning_factor = 0) y_pred = decision_tree.predict(X_test) print('R2 Score: ', metrics.r2_score(y_test, y_pred)) print('Time taken to run: ', time.time() - start_time) # + [markdown] id="3k2r_D8_qL1T" # We can see that _weak learner_ algorithm (Decision Tree) gives an accuracy of (~71%). # + [markdown] id="yFt2KyxIyKQo" # We run our algorithm mutiple times with different parameters. 
# + colab={"base_uri": "https://localhost:8080/", "height": 0} id="h0aO9_R_bF0h" outputId="5241643b-6882-48f6-e548-670f1256e95f" start_time = time.time() adaBoostRT = AdaBoostRT() adaBoostRT.fit(X_train, y_train, threshold_phi = 0.5, number_of_iterations = 20, min_coef_of_var = 0.025, pruning_factor = 0) y_pred = adaBoostRT.predict(X_test) print('R2 Score: ', metrics.r2_score(y_test, y_pred)) print('Time taken to run: ', time.time() - start_time) # + colab={"base_uri": "https://localhost:8080/", "height": 0} id="zeFUyOnNx-zB" outputId="35d1e41a-adb4-4e36-dca0-2a1483aae44c" start_time = time.time() adaBoostRT = AdaBoostRT() adaBoostRT.fit(X_train, y_train, threshold_phi = 0.5, number_of_iterations = 40, min_coef_of_var = 0.025, pruning_factor = 0) y_pred = adaBoostRT.predict(X_test) print('R2 Score: ', metrics.r2_score(y_test, y_pred)) print('Time taken to run: ', time.time() - start_time) # + colab={"base_uri": "https://localhost:8080/", "height": 0} id="OFqSrI6_yATU" outputId="1c68e1b6-63e9-4ca2-c5f6-6732c9999007" start_time = time.time() adaBoostRT = AdaBoostRT() adaBoostRT.fit(X_train, y_train, threshold_phi = 0.3, number_of_iterations = 20, min_coef_of_var = 0.025, pruning_factor = 0) y_pred = adaBoostRT.predict(X_test) print('R2 Score: ', metrics.r2_score(y_test, y_pred)) print('Time taken to run: ', time.time() - start_time) # + colab={"base_uri": "https://localhost:8080/", "height": 0} id="8hyCEh5TyDIp" outputId="7f41f462-f453-47be-bedf-bce57e4c2cad" start_time = time.time() adaBoostRT = AdaBoostRT() adaBoostRT.fit(X_train, y_train, threshold_phi = 0.3, number_of_iterations = 40, min_coef_of_var = 0.025, pruning_factor = 0) y_pred = adaBoostRT.predict(X_test) print('R2 Score: ', metrics.r2_score(y_test, y_pred)) print('Time taken to run: ', time.time() - start_time) # + [markdown] id="vjqUKlXRqdIf" # AdaBoost.RT improves the accuract of _weak learner_ from (\~71%) to \(~84%), which is a huge improvemnt. # + [markdown] id="5npaNG14q0Na" # A plot of predicted value v/s actual value is present below. # + colab={"base_uri": "https://localhost:8080/", "height": 279} id="1rd_xKGGlMoW" outputId="62d93506-d61d-4836-c919-5e876460fddb" import matplotlib.pyplot as plt plt.plot(y_test, y_test, color = "green") plt.xlabel("actual") plt.ylabel("predicted") plt.scatter(y_test, y_pred) plt.show() # + [markdown] id="I4n36mxTrBrF" # We also test our algorithm on california housing dataset from sklearn library. # + colab={"base_uri": "https://localhost:8080/", "height": 0} id="VDsDHAs5GlfU" outputId="d1f78f43-a82c-40dc-c671-4519acc86407" N = 1000 from sklearn.datasets import fetch_california_housing data = fetch_california_housing(return_X_y=False) X = data.data[:N] y = data.target[:N] from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0) print(X_train.shape) # + colab={"base_uri": "https://localhost:8080/", "height": 0} id="QDu4RUIFp5PO" outputId="afbc67e9-65fd-4536-c9c2-8bdbbfeac41f" start_time = time.time() decision_tree = DecisionTree() decision_tree.fit(X_train, y_train, min_coef_of_var = 0.025, pruning_factor = 0) y_pred = decision_tree.predict(X_test) print('R2 Score: ', metrics.r2_score(y_test, y_pred)) print('Time taken to run: ', time.time() - start_time) # + [markdown] id="fx_bR7EorNNy" # We can see that _weak learner_ algorithm (Decision Tree) gives an accuracy of (~61%). # + [markdown] id="3ACgl3E_xnbK" # We run our algorithm mutiple times with different parameters. 
# + colab={"base_uri": "https://localhost:8080/", "height": 0} id="DCMK1_pFxib8" outputId="b51fecb2-9d06-4099-866a-89514eec93f5" start_time = time.time() adaBoostRT = AdaBoostRT() adaBoostRT.fit(X_train, y_train, threshold_phi = 0.3, number_of_iterations = 80, min_coef_of_var = 0.025, pruning_factor = 0) y_pred = adaBoostRT.predict(X_test) print('R2 Score: ', metrics.r2_score(y_test, y_pred)) print('Time taken to run: ', time.time() - start_time) # + id="M8NNskCUrMbL" colab={"base_uri": "https://localhost:8080/", "height": 0} outputId="68dfe633-ccf7-434b-d2ce-055d93f6ff33" start_time = time.time() adaBoostRT = AdaBoostRT() adaBoostRT.fit(X_train, y_train, threshold_phi = 0.3, number_of_iterations = 40, min_coef_of_var = 0.025, pruning_factor = 0) y_pred = adaBoostRT.predict(X_test) print('R2 Score: ', metrics.r2_score(y_test, y_pred)) print('Time taken to run: ', time.time() - start_time) # + colab={"base_uri": "https://localhost:8080/", "height": 0} id="ZdO1XyKMxZpZ" outputId="3db2c4b3-9924-4e12-f1bf-0a0d9e12d486" start_time = time.time() adaBoostRT = AdaBoostRT() adaBoostRT.fit(X_train, y_train, threshold_phi = 0.5, number_of_iterations = 80, min_coef_of_var = 0.025, pruning_factor = 0) y_pred = adaBoostRT.predict(X_test) print('R2 Score: ', metrics.r2_score(y_test, y_pred)) print('Time taken to run: ', time.time() - start_time) # + colab={"base_uri": "https://localhost:8080/", "height": 0} id="maBeEhhSyzaI" outputId="39e956bb-09b9-4499-ad19-69b2a4ea1809" start_time = time.time() adaBoostRT = AdaBoostRT() adaBoostRT.fit(X_train, y_train, threshold_phi = 0.5, number_of_iterations = 40, min_coef_of_var = 0.025, pruning_factor = 0) y_pred = adaBoostRT.predict(X_test) print('R2 Score: ', metrics.r2_score(y_test, y_pred)) print('Time taken to run: ', time.time() - start_time) # + [markdown] id="pQNM7dIJrzbK" # AdaBoost.RT improves the accuract of _weak learner_ from (\~61%) to \(~71%), which is a huge improvement. # + [markdown] id="8QMfwMypr7ri" # A plot of predicted value v/s actual value is present below. # + colab={"base_uri": "https://localhost:8080/", "height": 279} id="cpX0u7IlK0Yo" outputId="4fcea16b-f47b-457a-bb90-ea76cfa0a8f3" import matplotlib.pyplot as plt plt.plot(y_test, y_test, color = "green") plt.xlabel("actual") plt.ylabel("predicted") plt.scatter(y_test, y_pred) plt.show()
21,846
/Tutorials/PandasCookbook_Code/Chapter06/Chapter 6 Index Alignment.ipynb
6803499ea136d8b22325d3668b86eb996a6878f7
[]
no_license
l-szczyrba/Misc_ClassScripts
https://github.com/l-szczyrba/Misc_ClassScripts
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
216,970
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import numpy as np from bokeh.plotting import figure from bokeh.io import show,output_notebook from bokeh.layouts import gridplot from bokeh.models import LinearColorMapper from bokeh.palettes import Category20,Plasma output_notebook() data = pd.read_csv('../home_sales/kc_house_data.csv') data.describe() data['zipcode'].value_counts() # + data = data[(data['sqft_living']>=1400) & (data['sqft_living']<2500)] data = data[(data['price']>=300000) & (data['price']<=575000)] data = data[(data['sqft_lot']>=5000) & (data['sqft_lot']<=10000)] data = data[data['zipcode']==98103] # - data['zipcode'] data.describe() data.describe() from bokeh.transform import linear_cmap source=ColumnDataSource(data) colors = [Category20[20][int(4*x)] for x in data['bathrooms']] f=figure() f.scatter(x='sqft_living',y='price',source=source,color=linear_cmap('zipcode',Plasma[8],low=98000,high=98200)) show(f) from itertools import combinations figs=[] for x,y in combinations(['living','bedrooms','price'],2): f=figure() f.scatter(x=d[x],y=d[y]) f.xaxis.axis_label = x f.yaxis.axis_label = y figs.append(f) g=gridplot([figs[0:2],[figs[2]]],plot_height=250,plot_width=250) show(g) E = np.ones((d['living'].shape[0],)) E.shape X = np.stack([d['living'],d['lot']]).transpose() Y = d['price'] X0=(X-X.mean(axis=0))/100000 Y0 = (Y-Y.mean(axis=0))/100000 D0 = np.dot(X0.transpose(),X0) D0 D0inv=np.linalg.inv(D0) M=np.dot(D0inv,np.dot(X0.transpose(),Y0)) M Y0hat = np.dot(X0[:,0],M[0]) f=figure() f.scatter(x=X0[:,0],y=Y0,color=X0[:,1]) f.line(x=X0[:,0],y=Y0hat,color='red',line_width=4) show(f) adverts = pd.read_csv('../isl_advertising/Advertising.csv',index_col=0) adverts from bokeh.models import ColumnDataSource source=ColumnDataSource(adverts) fig=figure() fig.scatter(x='TV',y='radio',source=source) show(fig) data = pd.read_csv('../yacht_hydrodynamics/yacht_hydrodynamics.data',delim_whitespace=True,header=None) fig=figure() fig.scatter(x=data[1].values,y=data[6].values) show(fig) sizes = np.random.normal() pwd # cd ../esl_prostate prostate = pd.read_csv('prostate.data',delim_whitespace=True) prostate source = ColumnDataSource(prostate) fig=figure() fig.scatter(x='lcavol',y='lpsa',source=source,color=linear_cmap('lweight',Plasma[8],low=2,high=5)) show(fig) prostate.describe() from sklearn.linear_model import LinearRegression L=LinearRegression() # + L.fit(prostate[['lcavol','lweight']],prostate['lpsa']) L.intercept_ L.coef_ yhat=L.predict(prostate[['lcavol','lweight']]) f=figure() f.scatter(x=prostate['lpsa'],y=yhat) f.line(x=[0,5],y=[0,5]) show(f) # - E = np.sum(np.square(yhat-prostate['lpsa'])) E # + L.fit(prostate['lcavol'].values.reshape(-1,1),prostate['lpsa']) L.intercept_ L.coef_ yhat=L.predict(prostate['lcavol'].values.reshape(-1,1)) f=figure() f.scatter(x=prostate['lpsa'],y=yhat) f.line(x=[0,5],y=[0,5]) show(f) # - E = np.sum(np.square(yhat-prostate['lpsa'])) E ry = max_dept_salary.set_index('DEPARTMENT') employee = employee.set_index('DEPARTMENT') employee['MAX_DEPT_SALARY'] = max_dept_salary['BASE_SALARY'] pd.options.display.max_columns = 6 employee.head() employee.query('BASE_SALARY > MAX_DEPT_SALARY') # ## How it works... 
np.random.seed(1234) random_salary = dept_salary.sample(n=10).set_index('DEPARTMENT') random_salary employee['RANDOM_SALARY'] = random_salary['BASE_SALARY'] # ## There's more... employee['MAX_SALARY2'] = max_dept_salary['BASE_SALARY'].head(3) employee.MAX_SALARY2.value_counts() employee.MAX_SALARY2.isnull().mean() # # Highlighting maximum value from each column pd.options.display.max_rows = 8 college = pd.read_csv('data/college.csv', index_col='INSTNM') college.dtypes college.MD_EARN_WNE_P10.value_counts().head() college['MD_EARN_WNE_P10'] = pd.to_numeric(college.MD_EARN_WNE_P10, errors='coerce') college['GRAD_DEBT_MDN_SUPP'] = pd.to_numeric(college.GRAD_DEBT_MDN_SUPP, errors='coerce') college.dtypes.loc[['MD_EARN_WNE_P10', 'GRAD_DEBT_MDN_SUPP']] college_numeric = college.select_dtypes(include=[np.number]) college_numeric.head() # only numeric columns criteria = college_numeric.nunique() == 2 criteria.head() binary_cols = college_numeric.columns[criteria].tolist() binary_cols college_numeric2 = college_numeric.drop(labels=binary_cols, axis='columns') college_numeric2.head() max_cols = college_numeric2.idxmax() max_cols unique_max_cols = max_cols.unique() unique_max_cols[:5] college_numeric2.loc[unique_max_cols].style.highlight_max() college_numeric2.loc[max_cols.values] # # Replicating idxmax with method chaining college = pd.read_csv('data/college.csv', index_col='INSTNM') college['MD_EARN_WNE_P10'] = pd.to_numeric(college.MD_EARN_WNE_P10, errors='coerce') college['GRAD_DEBT_MDN_SUPP'] = pd.to_numeric(college.GRAD_DEBT_MDN_SUPP, errors='coerce') college_numeric = college.select_dtypes(include=[np.number]) criteria = college_numeric.nunique() == 2 binary_cols = college_numeric.columns[criteria].tolist() college_numeric = college_numeric.drop(labels=binary_cols, axis='columns') college_numeric.max().head() college_numeric.eq(college_numeric.max()).head() has_row_max =college_numeric.eq(college_numeric.max()).any(axis='columns') has_row_max.head() college_numeric.shape has_row_max.sum() college_numeric.eq(college_numeric.max()).cumsum().cumsum().eq(1).any(axis='columns')[lambda x: x] # %timeit college_numeric2.idxmax() # + # college_numeric2.idxmax?? # - pd.options.display.max_rows=6 college_numeric.eq(college_numeric.max()).cumsum() college_numeric.eq(college_numeric.max()).cumsum().cumsum() college_numeric.eq(college_numeric.max()).cumsum().cumsum() college_idxmax = college_numeric.eq(college_numeric.max())\ .cumsum()\ .cumsum()\ .eq(1)\ .any(axis='columns') college_idxmax.head() idxmax_cols = college_idxmax[college_idxmax] idxmax_cols set(college_numeric.idxmax().unique()) == set(idxmax_cols.index)
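# The double-cumsum trick above is the least obvious step of the idxmax replication. A tiny
# self-contained example (toy data, not drawn from the college dataset) shows why
# `eq(max).cumsum().cumsum().eq(1)` marks only the first row attaining each column's maximum.
# +
import pandas as pd

# Column "a" attains its max (5) twice, in rows r2 and r3; column "b" peaks once, in r1.
toy = pd.DataFrame({'a': [1, 5, 5], 'b': [9, 2, 3]}, index=['r1', 'r2', 'r3'])

is_max = toy.eq(toy.max())                    # True wherever a cell equals its column max
first_max = is_max.cumsum().cumsum().eq(1)    # one cumsum: a -> 0,1,2; two: a -> 0,1,3, so only r2 is 1
rows_with_a_max = first_max.any(axis='columns')

print(rows_with_a_max[rows_with_a_max].index.tolist())   # ['r1', 'r2']
print(sorted(toy.idxmax().unique()))                     # ['r1', 'r2'] -- same rows as idxmax
# -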
6,338
/example/SampleMethods/RSS/RSS_Example4.ipynb
f69b848d0aaec847334a2214b5227485295c3ae5
[ "MIT" ]
permissive
dimtsap/UQpy
https://github.com/dimtsap/UQpy
0
1
MIT
2021-02-10T22:42:14
2021-02-09T20:41:06
Jupyter Notebook
Jupyter Notebook
false
false
.py
152,512
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Refined Stratified Sampling Example 4 # Author: Mohit S. Chauhan Date: Jan 25, 2019 # In this example, Stratified sampling is used to generate samples from Uniform distribution and sample expnsion is done adaptively using Refined Stratified Sampling. # Import the necessary libraries. Here we import standard libraries such as numpy, matplotlib and other necessary library for plots, but also need to import the STS, RSS and Krig class from UQpy. from UQpy.SampleMethods import VoronoiStrata, VoronoiSTS, VoronoiRSS from UQpy.Surrogates import Kriging from UQpy.RunModel import RunModel from UQpy.Distributions import Uniform import matplotlib.pyplot as plt from matplotlib import cm from mpl_toolkits import mplot3d from matplotlib.ticker import LinearLocator, FormatStrFormatter import matplotlib.patches as patches import numpy as np from scipy.spatial import Delaunay from scipy.spatial import Voronoi, voronoi_plot_2d from sklearn.gaussian_process import GaussianProcessRegressor from sklearn.gaussian_process.kernels import Matern # Create a distribution object. marginals = [Uniform(loc=0., scale=1.), Uniform(loc=0., scale=1.)] strata = VoronoiStrata(nseeds=16, dimension=2) # Using UQpy STS class to generate samples for two random variables, which are uniformly distributed between 0 and 1. x = VoronoiSTS(dist_object=marginals, strata_object=strata, nsamples_per_stratum=1, random_state=1) # This plot shows the samples and stratas generated by the STS class. fig = voronoi_plot_2d(x.strata_object.voronoi) plt.title('Stratified Samples (U(0,1)) - Voronoi Stratification') plt.plot(x.samples[:, 0], x.samples[:, 1], 'dm') plt.ylim(0, 1) plt.xlim(0, 1) plt.show() # RunModel class is used to define an object to evaluate the model at sample points. rmodel = RunModel(model_script='python_model_function.py') # This figure shows the actual function defined in python model script. # + rmodel1 = RunModel(model_script='python_model_function.py') rmodel1.run(samples=x.samples) num = 50 x1 = np.linspace(0, 1, num) x2 = np.linspace(0, 1, num) x1v, x2v = np.meshgrid(x1, x2) y_act = np.zeros([num, num]) r1 = RunModel(model_script='python_model_function.py') for i in range(num): for j in range(num): r1.run(samples=np.array([[x1v[i, j], x2v[i, j]]])) y_act[i, j] = r1.qoi_list[-1] fig1 = plt.figure() ax = fig1.gca(projection='3d') # Plot for estimated values surf = ax.plot_surface(x1v, x2v, y_act, cmap=cm.coolwarm, linewidth=0, antialiased=False) # Customize the z axis. ax.set_zlim(-1, 15) ax.zaxis.set_major_locator(LinearLocator(10)) ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f')) # Add a color bar which maps values to colors. fig1.colorbar(surf, shrink=0.5, aspect=5) plt.show() # - # Scikit-learn Gaussian Process Regrssor is used to generated a surrogate model using STS samples and function value at those points. User can also run the same example script using Kriging class from UQpy library. # + k1 = 1.0 * Matern(length_scale=1.0, length_scale_bounds=(1e-1, 10.0), nu=1.5) K = GaussianProcessRegressor(kernel=k1, n_restarts_optimizer=5) # K = Kriging(reg_model='Linear', corr_model='Exponential', n_opt=10, corr_model_params=[1, 1]) # - # This figure shows the surrogate model generated using Krig class based on initial samples. 
Note that, user don't have to fit the surrogate model before executing RSS class, it is done here to show the 3-D plot. # + K.fit(x.samples, rmodel1.qoi_list) num = 25 x1 = np.linspace(0, 1, num) x2 = np.linspace(0, 1, num) x1v, x2v = np.meshgrid(x1, x2) y = np.zeros([num, num]) for i in range(num): for j in range(num): y[i, j] = K.predict(np.array([[x1v[i, j], x2v[i, j]]])) fig2 = plt.figure() ax2 = fig2.gca(projection='3d') # Plot for estimated values kr = ax2.plot_wireframe(x1v, x2v, y, color='Green', label='Kriging interpolate') # Plot for scattered data ID = ax2.scatter3D(x.samples[:, 0], x.samples[:, 1], rmodel1.qoi_list, color='Red', label='Input data') plt.legend(handles=[kr, ID]) plt.show() # - # Using UQpy RSS class to expand samples generated by STS class. In this example, meta specifies the method used to estimate the gradient and Voronoi cells are used for stratification. Krig class is used with 'Gaussian' correlation model. z = VoronoiRSS(sample_object=x, runmodel_object=rmodel, krig_object=K, random_state=2) # After initiating the RSS class object, new samples are generated using the RSS.sample method. z.run(nsamples=200) # This figure shows the final samples generated using RSS class, where red dots shows the initial samples. fig = voronoi_plot_2d(z.strata_object.voronoi) plt.title('Gradient Enhanced RSS - Voronoi Stratification') plt.plot(z.samplesU01[:, 0], z.samplesU01[:, 1], 'dm') plt.ylim(0, 1) plt.xlim(0, 1) plt.show() # This figure shows the final surrogate model, generated using 200 samples. # + y = np.zeros([num, num]) for i in range(num): for j in range(num): y[i, j] = z.krig_object.predict(np.array([[x1v[i, j], x2v[i, j]]])) plt.clf() fig4 = plt.figure() a2 = fig4.gca(projection='3d') # Plot for estimated values kr = a2.plot_wireframe(x1v, x2v, y, color='Green', label='Kriging interpolate') # Plot for scattered data ID = a2.scatter3D(z.samples[:, 0], z.samples[:, 1], z.runmodel_object.qoi_list, color='Red', label='Input data') plt.legend(handles=[kr, ID]) plt.show()
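# Refining Voronoi strata requires some measure of how much of the domain each cell covers.
# UQpy handles this internally; the fragment below is only an illustrative sketch of one way
# to approximate the cell volumes on the unit square by Monte Carlo nearest-seed counting
# (the `seeds` attribute name is an assumption here, not taken from the UQpy calls above).
# +
import numpy as np
from scipy.spatial import cKDTree

def approx_voronoi_volumes(seeds, n_mc=100000, seed=0):
    """Fraction of the unit hypercube that lies closest to each Voronoi seed."""
    rng = np.random.RandomState(seed)
    pts = rng.uniform(size=(n_mc, seeds.shape[1]))   # uniform points in [0, 1]^d
    _, nearest = cKDTree(seeds).query(pts)           # index of the closest seed per point
    return np.bincount(nearest, minlength=len(seeds)) / n_mc

# e.g. vols = approx_voronoi_volumes(np.asarray(strata.seeds))  # assumed attribute name
# np.argmax(vols) would then point at the largest stratum, one crude refinement criterion.
# -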
5,639
/Sponsor/notebooks/Ensemble Methods.ipynb
ccea55a4aefdc5818f768a3516cec0661573ecd7
[]
no_license
ErikPohle/CSCI-4308-TerumoBCT
https://github.com/ErikPohle/CSCI-4308-TerumoBCT
0
0
null
2021-04-28T22:01:25
2021-04-28T20:07:21
null
Jupyter Notebook
false
false
.py
17,637
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd from functools import reduce import numpy as np import gc import matplotlib.pyplot as plt from sklearn.ensemble import BaggingClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.metrics import classification_report, confusion_matrix from sklearn.model_selection import train_test_split from sklearn.model_selection import cross_val_score from sklearn.datasets import load_iris from sklearn.ensemble import AdaBoostClassifier pd.set_option('display.max_columns', None) data = pd.read_feather("../data/finaldata/fact_data.feather") tmp_data = data._get_numeric_data() tmp_data = tmp_data.fillna(0) tmp_data.head() train_data, test_data = train_test_split(tmp_data, test_size=0.5) bagging = BaggingClassifier(KNeighborsClassifier(), max_samples=0.5, max_features=0.5) X = train_data['run_duration_minutes'].values.reshape(-1, 1).astype('int') Y = train_data['number_of_alerts'].astype('int') clf = bagging.fit(X, Y) scores = cross_val_score(clf, X, Y) scores.mean() iris = load_iris() clf = AdaBoostClassifier(n_estimators=100) scores = cross_val_score(clf, iris.data, iris.target) scores.mean()
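# As a rough point of comparison, the boosted ensemble can also be scored on the same
# single-feature alert data used for the bagging model, rather than only on iris. This is a
# sketch: it reuses the X and Y built above, so it depends on the feather file being
# available, and no scores are claimed here.
# +
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

candidates = {
    'bagged kNN': BaggingClassifier(KNeighborsClassifier(), max_samples=0.5),
    'AdaBoost': AdaBoostClassifier(n_estimators=100),
}
for name, model in candidates.items():
    scores = cross_val_score(model, X, Y, cv=5)   # X, Y defined in the cells above
    print(name, scores.mean())
# -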
1,475
/python_code/수업/데이터마이닝/homework/Report9_clustering_서지희.ipynb
5a6111704ead828d07ea8dd58feae8e623054d3e
[]
no_license
muyaaho/study
https://github.com/muyaaho/study
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
154,418
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from sklearn.cluster import DBSCAN, KMeans from sklearn import cluster from sklearn.datasets import make_moons, make_circles from sklearn.preprocessing import StandardScaler import numpy as np import pandas as pd import matplotlib.pyplot as plt np.random.seed(715) X,y = make_moons(n_samples=100, noise=0.05) Xc, yc = make_circles(n_samples=1000, factor=0.5, noise=0.05) Xc = StandardScaler().fit_transform(Xc) X = StandardScaler().fit_transform(X) # + pycharm={"name": "#%%\n"} plt.figure(figsize=(12,12)) kmeans = KMeans(n_clusters=3, random_state=42).fit(Xc) y_pred = kmeans.fit_predict(Xc) plt.subplot(331) plt.scatter(Xc[:,0], Xc[:,1], c= y_pred) plt.title('k-means clustering') bandwidth = cluster.estimate_bandwidth(Xc, quantile=0.2, n_samples=500) ms = cluster.MeanShift(bandwidth = bandwidth, bin_seeding=True).fit(Xc) y_pred = ms.fit_predict(Xc) plt.subplot(332) plt.scatter(Xc[:,0], Xc[:,1], c = y_pred) plt.title('MeanShift clustering') dbscan=DBSCAN(eps = 0.2, min_samples=5).fit(Xc) y_pred = dbscan.fit_predict(Xc) plt.subplot(333) plt.scatter(Xc[:,0], Xc[:,1], c = y_pred) plt.title('DBSCAN clustering') # + pycharm={"name": "#%%\n"} plt.figure(figsize=(12,12)) kmeans = KMeans(n_clusters=3, random_state=42).fit(X) y_pred = kmeans.fit_predict(X) plt.subplot(331) plt.scatter(X[:,0], X[:,1], c = y_pred) plt.title('k-means clustering') bandwidth = cluster.estimate_bandwidth(X, quantile=0.2, n_samples=500) ms=cluster.MeanShift(bandwidth=bandwidth, bin_seeding=True).fit(X) y_pred = ms.fit_predict(X) plt.subplot(332) plt.scatter(X[:,0], X[:,1], c = y_pred) plt.title('MeanShift clustering') dbscan=DBSCAN(eps = 0.2, min_samples=5).fit(X) y_pred = dbscan.fit_predict(X) plt.subplot(333) plt.scatter(X[:,0], X[:,1], c = y_pred) plt.title('DBSCAN clustering') plt.suptitle('noisy_circles dataset clustering', fontsize = 14) # + pycharm={"name": "#%%\n"} r(b): characters = [] for l in b: for m in l: if m.lower() in characters: pass else: characters.append(m.lower()) return sorted(characters) def output_char(b): characters = [] for l in b: for m in l: if m in characters: pass else: characters.append(m) return sorted(characters) input_characters = input_char(splited_x) target_characters = output_char(splited_y) # - len(input_characters), len(target_characters) input_characters[:5] target_characters[:5] num_of_encoder_tokens, num_of_decoder_tokens = len(input_characters), len(target_characters) # + def find_max(x): list_length = [] for k in x: length = len(k) list_length.append(length) return max(list_length) find_max(splited_x), find_max(splited_y) # - ratio = 10/177 ratio # + dict_input_char = {} ratio = 0.0565 update = ratio #zero is for input-target 0 for k in input_characters: dict_input_char[k] = update update = update+ ratio # - output_ratio = 10/num_of_decoder_tokens output_ratio # + dict_output_char = {} output_ratio = 0.0541 update = output_ratio #zero is for input-target 0 for k in target_characters: dict_output_char[k] = update update = update+ output_ratio # + # pad the data def padding(x, val): copy_x = x.copy() for i,k in enumerate(x): if len(k) <val: dif = val- len(k) list_diff = [' ']*dif for k in range(dif): copy_x[i].append(' ') else: pass return copy_x copy_splited_x = padding(splited_x,5) # - copy_splited_y = padding(splited_y,7) # + # input data is of shape 5 # output 
data is of shape 7 dict_input_char[' '] = 0.000 dict_output_char[' '] = 0.000 # + def encode_input_data(data): for k,l in enumerate(data): for i,j in enumerate(l): data[k][i] = dict_input_char[j.lower()] return data encoded_input_data = encode_input_data(copy_splited_x) # + #encoded_input_data # + def encode_target_data(data): for k,l in enumerate(data): for i,j in enumerate(l): data[k][i] = dict_output_char[j] return data encoded_target_data = encode_target_data(copy_splited_y) # - encoded_input_data[:5] encoded_target_data[:5] # # modelling # + #create dataframe def create_frame(x,y, index = 0): a,b,c,d,e, target = [],[],[],[],[],[] for l,m in zip(x,y): a.append(l[0]) b.append(l[1]) c.append(l[2]) d.append(l[3]) e.append(l[4]) target.append(m[index]) dict_frame = {'a':a,'b':b,'c':c,'d':d,'e':e} y = {'target': target} return pd.DataFrame(dict_frame), np.array(target) x0, y0 = create_frame(encoded_input_data,encoded_target_data, index = 0) x1, y1 = create_frame(encoded_input_data,encoded_target_data, index = 1) x2, y2 = create_frame(encoded_input_data,encoded_target_data, index = 2) x3, y3 = create_frame(encoded_input_data,encoded_target_data, index = 3) x4, y4 = create_frame(encoded_input_data,encoded_target_data, index = 4) x5, y5 = create_frame(encoded_input_data,encoded_target_data, index = 5) x6, y6 = create_frame(encoded_input_data,encoded_target_data, index = 6) # - from sklearn.model_selection import train_test_split x_train0, x_test0, y_train0, y_test0 = train_test_split(x0, y0, test_size = 0.1, random_state = 42) # + from sklearn.tree import DecisionTreeRegressor lr0 = DecisionTreeRegressor() lr0.fit(x_train0, y_train0) print('The training accuracy is: ',lr0.score(x_train0, y_train0)) print('The test accuracy is: ',lr0.score(x_test0, y_test0)) # - x_train1, x_test1, y_train1, y_test1 = train_test_split(x1, y1, test_size = 0.2, random_state = 42) x_train2, x_test2, y_train2, y_test2 = train_test_split(x2, y2, test_size = 0.2, random_state = 42) x_train3, x_test3, y_train3, y_test3 = train_test_split(x3, y3, test_size = 0.2, random_state = 42) x_train4, x_test4, y_train4, y_test4 = train_test_split(x4, y4, test_size = 0.2, random_state = 42) x_train5, x_test5, y_train5, y_test5 = train_test_split(x5, y5, test_size = 0.2, random_state = 42) x_train6, x_test6, y_train6, y_test6 = train_test_split(x6, y6, test_size = 0.2, random_state = 42) # + lr1 = DecisionTreeRegressor() lr1.fit(x_train1, y_train1) print('The training accuracy is: ',lr1.score(x_train1, y_train1)) print('The test accuracy is: ',lr1.score(x_test1, y_test1)) # + lr2 = DecisionTreeRegressor() lr2.fit(x_train2, y_train2) print('The training accuracy is: ',lr2.score(x_train2, y_train2)) print('The test accuracy is: ',lr2.score(x_test2, y_test2)) # + lr3 = DecisionTreeRegressor() lr3.fit(x_train3, y_train3) print('The training accuracy is: ',lr3.score(x_train3, y_train3)) print('The test accuracy is: ',lr3.score(x_test3, y_test3)) # + lr4 = DecisionTreeRegressor() lr4.fit(x_train4, y_train4) print('The training accuracy is: ',lr4.score(x_train4, y_train4)) print('The test accuracy is: ',lr4.score(x_test4, y_test4)) # + lr5 = DecisionTreeRegressor() lr5.fit(x_train5, y_train5) print('The training accuracy is: ',lr5.score(x_train5, y_train5)) print('The test accuracy is: ',lr5.score(x_test5, y_test5)) # + lr6 = DecisionTreeRegressor() lr6.fit(x_train6, y_train6) print('The training accuracy is: ',lr6.score(x_train6, y_train6)) print('The test accuracy is: ',lr6.score(x_test6, y_test6)) # - import joblib, os current_dir = 
os.getcwd() save_directory = os.path.join(current_dir, 'models/dtr/') if not os.path.exists(save_directory): os.makedirs(save_directory) joblib.dump(lr0, save_directory+'/lr0dtr.pkl') joblib.dump(lr1, save_directory+'/lr1dtr.pkl') joblib.dump(lr2, save_directory+'/lr2dtr.pkl') joblib.dump(lr3, save_directory+'/lr3dtr.pkl') joblib.dump(lr4, save_directory+'/lr4dtr.pkl') joblib.dump(lr5, save_directory+'/lr5dtr.pkl') joblib.dump(lr6, save_directory+'/lr6dtr.pkl') # write the dict_input_char joblib.dump(dict_input_char, save_directory+'/dict_input_char.pkl') joblib.dump(dict_output_char, save_directory+'/dict_output_char.pkl') # # create pipeline os.getcwd() import pickle #os.chdir(os.getcwd() +'/models/dtr') input_dict = pickle.load(open('dict_input_char.pkl', 'rb')) output_dict = pickle.load(open('dict_output_char.pkl','rb')) lr0 = pickle.load(open('lr0dtr.pkl', 'rb')) lr1 = pickle.load(open('lr1dtr.pkl', 'rb')) lr2 = pickle.load(open('lr2dtr.pkl', 'rb')) lr3 = pickle.load(open('lr3dtr.pkl', 'rb')) lr4 = pickle.load(open('lr4dtr.pkl', 'rb')) lr5 = pickle.load(open('lr5dtr.pkl', 'rb')) lr6 = pickle.load(open('lr6dtr.pkl', 'rb')) # + def encode_input_data(data): data = data.split() data_length = len(data) if data_length <5: diff = 5 - data_length for k in range(diff): data.append(' ') for i,j in enumerate(data): data[i] = dict_input_char[j.lower()] return data input_test = 'I did it' encoded_input_data = encode_input_data(input_test) encoded_input_data # + def predict_encoder(data): data = np.array(data).reshape(1,-1) p0 = lr0.predict(data) p1 = lr1.predict(data) p2 = lr2.predict(data) p3 = lr3.predict(data) p4 = lr4.predict(data) p5 = lr5.predict(data) p6 = lr6.predict(data) return [float(p0),float(p1),float(p2),float(p3),float(p4),float(p5),float(p6)] prediction = predict_encoder(encoded_input_data) prediction # - list(dict_output_char.values())[0] def three_dp(val): val_str = str(val) val_length = len(val_str) if val_length <5: diff = 5 - val_length for k in range(diff): val_str = val_str +'0' cut_val = val_str else: cut_val = val_str[:5] return float(cut_val) three_dp(3.10000) # + def get_value(value): va = three_dp(0.0541/2) upper_range, lower_range = value +va, value - va key = '' for i,k in zip(list(dict_output_char.keys()), dict_output_char.values()): if k < three_dp(value) and k > lower_range: key = i elif k > three_dp(value) and k < upper_range: key = i else: continue return key get_value(8.980600000000004) # + def model_decoder(data): result = [] for k in data: result.append(get_value(k)) output = '' for k in result: output += ' ' +k return output model_decoder(prediction) # + import os, sys import numpy as np def encode_input_data(data): data = data.split() data_length = len(data) if data_length <5: diff = 5 - data_length for k in range(diff): data.append(' ') for i,j in enumerate(data): data[i] = dict_input_char[j.lower()] return data def predict_encoder(data): data = np.array(data).reshape(1,-1) p0 = lr0.predict(data) p1 = lr1.predict(data) p2 = lr2.predict(data) p3 = lr3.predict(data) p4 = lr4.predict(data) p5 = lr5.predict(data) p6 = lr6.predict(data) return [float(p0),float(p1),float(p2),float(p3),float(p4),float(p5),float(p6)] def three_dp(val): val_str = str(val) val_length = len(val_str) if val_length <5: diff = 5 - val_length for k in range(diff): val_str = val_str +'0' cut_val = val_str else: cut_val = val_str[:5] return float(cut_val) def get_value(value): va = three_dp(0.0541/2) upper_range, lower_range = value +va, value - va key = '' for i,k in 
zip(list(dict_output_char.keys()), dict_output_char.values()): if k < three_dp(value) and k > lower_range: key = i elif k > three_dp(value) and k < upper_range: key = i else: continue return key def model_decoder(data): result = [] for k in data: result.append(get_value(k)) output = '' for k in result: output += ' ' +k return output import pickle #os.chdir(os.getcwd() +'/models/dtr') #input_dict = pickle.load(open('dict_input_char.pkl', 'rb')) #output_dict = pickle.load(open('dict_output_char.pkl','rb')) #lr0 = pickle.load(open('lr0dtr.pkl', 'rb')) #lr1 = pickle.load(open('lr1dtr.pkl', 'rb')) #lr2 = pickle.load(open('lr2dtr.pkl', 'rb')) #lr3 = pickle.load(open('lr3dtr.pkl', 'rb')) #lr4 = pickle.load(open('lr4dtr.pkl', 'rb')) #lr5 = pickle.load(open('lr5dtr.pkl', 'rb')) #lr6 = pickle.load(open('lr6dtr.pkl', 'rb')) def main(string): val = '' encoded_input_data = encode_input_data(string) prediction = predict_encoder(encoded_input_data) val = model_decoder(prediction) return val if __name__== '__main__': string = 'He worked hard' result = main(string) print(result) # - x[15:20] y[15:20]
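# The window test in get_value can return an empty string whenever a regression output falls
# just outside every half-step interval. A simpler decode (a sketch layered on the
# dict_output_char mapping defined above, not a replacement for the saved pipeline) is to take
# the character whose code is nearest to the predicted value.
# +
def nearest_char(value, mapping=None):
    # pick the (character, code) pair with the smallest absolute distance to the prediction
    mapping = dict_output_char if mapping is None else mapping
    best_key, _ = min(mapping.items(), key=lambda kv: abs(kv[1] - value))
    return best_key

def decode_nearest(predictions):
    return ' '.join(nearest_char(v) for v in predictions).strip()

# decode_nearest(prediction) agrees with model_decoder(prediction) whenever a prediction lands
# inside a half-step window, and still returns the closest character when it does not.
# -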
12,929
/3_analysis/1a_extract_info_single_run.ipynb
a68f90bea20cbe0731c9fbb5fd84dc6dbda0907f
[]
no_license
timmoon10/lbann_cosmogan
https://github.com/timmoon10/lbann_cosmogan
0
0
null
2020-08-27T22:57:03
2020-08-25T21:27:56
null
Jupyter Notebook
false
false
.py
12,708
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: v_py3 # language: python # name: v_jpt_py3 # --- # # Extract data from output files # # ### Code to extract timing information from output files of Lbann code # March 9, 2020 # + import numpy as np import matplotlib.pyplot as plt import pandas as pd import subprocess as sp import os import glob import itertools from ipywidgets import interact, interact_manual,fixed, SelectMultiple, RadioButtons # - # %matplotlib widget # ## Extract training times def f_extract_info(fname): ''' Module to extract information from out.log files of Lbann training Reads in file name ''' strg_lst=['objective','d_real','d_fake','gen','run time','mini-batch'] keys=['training_'+strg for strg in strg_lst] dict1={} for category in ['training','validation']: for strg in strg_lst: key=category+'_'+strg cmd='grep "{0}" {1} | grep "{2}"'.format(category,fname,strg) # print(cmd) op1=sp.check_output(cmd,shell=True).decode('utf-8').split('\n') obj=np.array([strg.split(':')[-1] for strg in op1 if strg]) dict1[key]=obj df=pd.DataFrame([]) key_lst=['training_objective', 'training_d_real', 'training_d_fake', 'training_gen', 'validation_objective', 'validation_d_real', 'validation_d_fake', 'validation_gen'] col_list=['train_obj','train_dreal','train_dfake','train_gen','val_obj','val_dreal','val_dfake','val_gen'] for col,key in zip(col_list,key_lst): df[col]=dict1[key].astype(np.float) ### Need to remove the trailing 's' in the timings for col,key in zip(['train_time','val_time'],['training_run time','validation_run time']): df[col]=np.array([i[:-1] for i in dict1[key]]).astype(np.float) for col,key in zip(['train_batch_stats','val_batch_stats'],['training_mini-batch','validation_mini-batch']): df[col]=dict1[key] return df # + ### Extract information from log file parent_dir='/global/cscratch1/sd/vpa/proj/cosmogan/results_dir/128square/' fldr_name='20200811_195351_bsize256_8gpurun_noconvbrelu' main_dir=parent_dir+'{0}/dump_outs/trainer0/model0/'.format(fldr_name) print(main_dir) strg=parent_dir+'{0}/out.log'.format(fldr_name) fname=glob.glob(strg)[0] print(fname) df=f_extract_info(fname) # - # df.columns df.head() # + def f_plot(df,col_list=['train_obj']): ''' Plot multiple columns of the dataframe ''' plt.figure() marker_lst=('o','*','H','D','.','x') marker=itertools.cycle(marker_lst) for col in col_list: plt.plot(df[col],linestyle='',marker=next(marker),label=col) plt.legend() plt.xlabel('Epoch') f_plot(df,col_list=['train_obj','train_dfake','train_dreal','train_gen']) # plt.savefig('fig2.png') # - ### Compare different quantities col_list=['train_obj', 'train_dreal', 'train_dfake', 'train_gen', 'val_obj', 'val_dreal', 'val_dfake', 'val_gen', 'train_time', 'val_time'] interact_manual(f_plot,col_list=SelectMultiple(options=col_list),df=fixed(df))
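# The extraction above shells out to grep twice per metric. As a sketch of a single-pass
# alternative (it assumes only what the code above already relies on: each relevant line
# contains the phase name, the metric name, and the value after the last colon), the same scan
# can be done in pure Python.
# +
def f_extract_info_py(fname, phases=('training', 'validation'),
                      metrics=('objective', 'd_real', 'd_fake', 'gen', 'run time', 'mini-batch')):
    '''One pass over the log file, collecting raw string values per phase/metric pair.'''
    results = {'{0}_{1}'.format(p, m): [] for p in phases for m in metrics}
    with open(fname) as fh:
        for line in fh:
            for p in phases:
                if p not in line:
                    continue
                for m in metrics:
                    if m in line:
                        results['{0}_{1}'.format(p, m)].append(line.rsplit(':', 1)[-1].strip())
    return results

# As in f_extract_info, stripping the trailing 's' from the run-time entries is left to the caller.
# -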
3,227
/lectures/lecture9/rng.ipynb
28d719bd408dfbde3d1003a5da8e0c0b303b3d52
[ "MIT" ]
permissive
mybirth0407/M1399_000100-2021spring
https://github.com/mybirth0407/M1399_000100-2021spring
0
0
null
null
null
null
Jupyter Notebook
false
false
.r
41,459
# --- # jupyter: # jupytext: # text_representation: # extension: .r # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: R # language: R # name: ir # --- sessionInfo() library(scatterplot3d) library(ggplot2) # # Random number generation # # ## Uniform random number generation # # ### Goal # # To generate $U_i \stackrel{iid}{\sim} \text{unif}(0, 1)$. # # * Basis for all other random number generation # * **Fact: NO RANDOM NUMBER IN COMPUTER** --- only look random (PSEUDO-RANDOM) # # ### Congruential generator # # \begin{align*} # x_{i+1} &= a x_i \mod m, \quad i=1, 2, \dotsc \\ # u_i &= x_i / m. # \end{align*} # # * Modulus $m$ is a large prime number. # * Multiplier $a$ is a positive integer between 2 and $m-1$. # * Map $f: x \mapsto ax \mod m$ maps $\{1, \dotsc, m-1\}$ onto itself and is one-to-one: # - Suppose $x\in\{1, \dotsc, m-1\}$. Then $f(x) \neq 0$ since $a$ and $m$ are relatively prime hence $ax$ is not a multiple of $m$. Thus $f$ maps $\{1, , \dotsc, m-1\}$ to $\{1, , \dotsc, m-1\}$. # - If $ay = ax \mod m$, then $a(y - x) = m k$ for some integer $k$. Since $a$ and $m$ are relatively prime, $y - x = m l$ for some interger $l$. That is, $x = y \mod m$. Hence map $f$ is one-to-one. # - Since $f$ maps $\{1, , \dotsc, m-1\}$ to $\{1, , \dotsc, m-1\}$ and one-to-one, $f$ is also onto. # # * Note # \begin{align*} # x_1 &= a x_0 \mod m \\ # x_2 &= a x_1 \mod m = a^2 x_0 \mod m \\ # x_3 &= a x_2 \mod m = a^3 x_0 \mod m \\ # & \vdots \\ # x_n &= a x_{n-1} \mod m = a^n x_0 \mod m # \end{align*} # Hence if $a^n = 1 \mod m$ for some $n$, then # $$ # x_n = x_0 \mod m # $$ # and the (pseudo)random number generator repeats the sequence. The number $n$ is called the **period** of the RNG, and $x_0$ is called the **seed**. # set.seed(2020) # set 2020th seed; does not mean x_0 = 2020 runif(5) set.seed(2020) # same seed results in same "random" sequence runif(5) # * Primitive root of unity # - Fermat's little theorem: $a^{m} = a \mod m$. # - Since $a$ is not divisible by $m$, we have $a^{m-1} = 1 \mod m$. # - Thus the period $n$ satisfies $n \le m - 1$. # - *Primitive* root of unity: $a$ such that $n = m - 1$ # - If $a$ is primitive, then $x_1, x_2, \dotsc, x_{m-1}$ is a permutation of $\{1, 2, \dotsc, m-1\}$. # - For $m=2^{31} - 1$ (Mersenne prime), $a=7^5 = 16807$ is primitive, leading to the period of $2^{31} - 2$ = `2,147,483,646`. This RNG was used in early versions of MATLAB (up to v4). # # Good RNGs should have long periods, and should give samples which appear to be drawn from a uniform distribution. If the size of the sample is much less than the period of the RNG, then the sample should appear random. # ### Autocorrelation # # * Ideally, a pseudorandom number sequence should look i.i.d. If we take the first $p$ values to form an $p$-dimensional vector, the second $p$ values to form a second $p$-dimensional vector, etc, then these vectors should fill the $p$-dimensional hypercube uniformly. # # * However, by construction the sequence generated by congruential generators depends on the previous value, hence tends to be correlated. # # * It is known that congruential generators tend to give $p$-dimensional vectors that concentrate lower-dimensional hyperplanes, for some $p$. 
# IBM System/360 RANDU generator # a = 2^16 + 3 = 65539 # m = 2^31 n <- 2000 a <- 65539 m <- 2^31 u <- vector(length=n) u[1] <- 1 for (i in seq_len(n-1)) { u[i+1] <- a * u[i] %% m } scatterplot3d(u[1:(n-2)], u[2:(n-1)], u[3:n], angle=160) # Early MATLAB RNG # a = 7^5 # m = 2^31 - 1 n <- 2000 a2 <- 7^5 m2 <- 2^31 - 1 v <- vector(length=n) v[1] <- 1 for (i in seq_len(n-1)) { v[i+1] <- a2 * v[i] %% m2 } scatterplot3d(v[1:(n-2)], v[2:(n-1)], v[3:n], angle=20) # A simple modification is to introduce *shuffling* in the sequence, which we won't cover in detail. # ### R's RNG # # * R uses the Mersenne-Twister as the default RNG. This RNG was developed by Matsumoto and Nishimura in 1998, and is the first algorithm whose period ($2^{19937} - 1$) exceeds the number of electron spin changes since the creation of the Universe ($10^{6000}$ against $10^{120}$)! # # * Mersenne-Twister guarantees 623 consecutive dimensions to be equidistributed (over the whole period). RNGkind() # ## Transformation methods # # From now on, we assume the the problem of generating uniform random numbers has been solved for practical purposes. # # ### Inverse CDF method # # For a random variable $X$, let $F$ be its cumulative distribution function (CDF), that is, $F(x) = P(X \le x)$. # Recall that $F$ is right-continuous and nondecreasing. Also, if $F$ is strictrly increasing, random variable $F(X)$ is uniformly distributed on $[0, 1]$. Below, we generalize this result. # # The inverse CDF of $X$ is defined as # $$ # F^{-1}(u) = \inf\{x: F(x) \ge u\}, # , # $$ # which coincides the usual inverse of $F$ if $F$ is strictly increasing. # # **Proposition 1**. Let $X$ be a random variable with CDF $F$. Then the following holds. # 1. If $F$ is continuous, then $U = F(X)$ is a uniform random variable on $[0, 1]$. # 2. If $F$ is not continous, then $P[F(X) \le y] \le y$ for all $y \in [0, 1]$. # 3. If $U$ is uniform on $[0, 1]$, then $F^{-1}(U)$ has CDF $F$. # # *Proof*. # * Part 1: We will show that # $$ # P[F(X) \le F(t)] = F(t) # \tag{*} # $$ # for any $t$. Suppose for now this is true. # # Let $u \in (0, 1)$. Then by continuity of $F$, there is $y$ such that $F(y) = u$. By (*), # $$ # P[F(X) \le u] = P[F(X) \le F(y)] = F(y) = u. # $$ # # * Part 3: It suffices to show that $\{F^{-1}(U) \le t\} = \{U \le F(t)\}$. If $F^{-1}(u)=\inf\{x: F(x) \ge u\} \le t$, then by the monotonicity and right-continuity of $F$, the set $\{x: F(x) \ge u\}$ is an half-closed interval containing its left endpoint, which is $F^{-1}(u)$. Hence $F(F^{-1}(u)) \ge u$. Since $F^{-1}(u) \le t$, again by monotonicity of $F$, it follows that $u \le F(F^{-1}(u)) \le F(t)$. Conversely, if $u \le F(t)$, then by definition $F^{-1}(u) \le t$. # # * Part 2: by part 3, $X\stackrel{d}{=}F^{-1}(U)$, where $U$ is uniform on $[0, 1]$. Since $x=F^{-1}(u)$ implies $u \le F(x)$, # $$ # P[F(X) \le y] \le P(U \le y) = y. # $$ # # * It remains to show (*). Monotonicity of $F$ yields $\{X > t\} \subset \{F(X) \ge F(t)\}$. Hence $\{X > t \} \cap \{F(X) < F(t) \} = \emptyset$. Likewise $\{X \le t\} \cap \{F(X) > F(t)\} = \emptyset\}$. Then, # \begin{align*} # \{X > t\} \cap \{F(X) \le F(t)\} &= # [\{X > t\} \cap \{F(X) < F(t) \}] \cup [\{X > t\} \cap \{F(X) = F(t)\}] \\ # &= \{X > t\} \cap \{F(X) = F(t)\} # \\ # \{X \le t\} \cap \{F(X) \le F(t)\} &= \{X \le t\} \cap \{F(X) > F(t)\}^c # = \{X \le t\} # \end{align*} # So, # $$ # \{F(X) \le F(t) \} = \{X \le t\} \cup [\{X > t\} \cap \{F(X) = F(t)\}] =: A \cup B # . # $$ # Obviously events $A$ and $B$ are disjoint. 
However, event $B$ corresponds to the values $x$ of $X$ with which $F(x)$ is constant. Hence $P(B)=0$. Therefore, # $$ # P[F(X) \le F(t)] = P(A) + P(B) = P(A) = P(X \le t) = F(t). # $$ # #### Exponential distribution # # * CDF $F(x) = 1 - e^{-\lambda x}$ yields $F^{-1}(u) = -\frac{1}{\lambda}\log u$ on $(0, 1)$. # Exp(1) n <- 1000 u <- runif(n) x <- -log(u) expdata <- data.frame(x) plt <- ggplot(expdata, aes(x=x)) + geom_histogram(binwidth=0.25, fill="white", color="black") + geom_density(aes(y=0.25 * ..count..), color="purple") print(plt) # #### Cauchy distribution # # From the density of the Cauchy distribution # $$ # f(x) = \frac{\beta}{\pi(\beta^2 + (x-\mu)^2)}, # $$ # its CDF is $F(x) = \frac{1}{2} + \frac{1}{\pi}\tan^{-1}\left(\frac{x-\mu}{\beta}\right)$. Thus # $$ # F^{-1}(u) = \mu + \beta\tan(\pi u - \pi/2) = \mu - \frac{\beta}{\tan(\pi u)} # . # $$ # standard Cauchy (beta=1, mu=0) n <- 1000 u <- runif(n) x <- -1/tan(pi * u) #hist(x, breaks=40) cauchydata <- data.frame(x) plt <- ggplot(cauchydata, aes(x=x)) + geom_histogram(binwidth=10, fill="white", color="black") + geom_density(aes(y=10 * ..count..), color="purple") + xlim(-200, 200) print(plt) # #### Discrete uniform distribution # # $X \sim \text{unif}(\{1, 2, \dotsc, k\})$. It is easy to verify $F(x) = \frac{1}{k}\lfloor x \rfloor$ for $x \in [0, n]$ and $F^{-1}(u)=\lceil ku \rceil$. # k <- 10 n <- 1000 u <- runif(n) x <- ceiling(k * u) table(x) # #### Geometric distribution # # If $X \sim \text{geom}(p)$, then its probability mass functin $p(x) = (1-p)^{x-1}p$. # # For $Y \sim \text{Exp}(\lambda)$, # \begin{align*} # P(\lceil Y \rceil = k) &= P(k-1 < Y \le k) = F_Y(k) - F_Y(k-1) = (1 - e^{-\lambda k}) - (1 - e^{-\lambda(k-1)}) \\ # &= e^{-\lambda(k-1)}(1 - e^{-\lambda}) \\ # &- (1 - p)^{k-1} p # \end{align*} # if $\lambda$ satisfies $p = 1 - e^{-\lambda}$, or $\lambda = -\log(1-p)$. # # For this $\lambda$, $X = \lceil Y \rceil = \lceil -\frac{1}{\lambda}\log U \rceil = \lceil \frac{\log U}{\log(1-p)}\rceil$. gengeom <- function(p, nsamp=1) { u <- runif(nsamp) y <- log(u) / log(1 - p) ceiling(y) } nsamp <- 1000 p <- 0.3 x <- gengeom(p, nsamp) geomdata <- data.frame(x) plt <- ggplot(geomdata, aes(x=x)) + geom_histogram(binwidth=0.25) print(plt) # ### Normal random numbers # # For $X \sim N(0, 1)$, inverse CDF $\Phi^{-1}$ does not have a closed form. # # #### Box-Muller # # Generates $X, Y \stackrel{iid}{\sim} N(0, 1)$. # # Transforms the random Cartesian coordinates $(X, Y)$ to polar coorinates $(R, \Theta)$. Since $(X, Y)=(R\cos\Theta, R\sin\Theta)$, # $$ # \iint f_{XY}(x, y)dxdy = \frac{1}{2\pi}\exp(-\frac{x^2 + y^2}{2})dxdy # = \iint \frac{1}{2\pi}\exp(-\frac{r^2}{2})rdrd\theta # . # $$ # # Hence $R$ has density $f_R(r) = r\exp(-\frac{r^2}{2})$ and $\Theta$ is uninform on $[0, 2\pi]$. Since # $$ # P(R > \rho) = P(R^2 > \rho^2) = \int_\rho^{\infty} r\exp(-\frac{r^2}{2})dr = \exp(-\frac{1}{2}\rho^2), # $$ # random variable $R^2$ is exponentially distributed with $\lambda = 1/2$. # # Thus for independent $U, V \sim \text{unif}(0, 1)$, set # $$ # R = (-2\log U)^{1/2}, \quad \Theta = 2\pi V # . # $$ # Then $(X, Y) = (R\cos\Theta, R\sin\Theta)$ are independently $N(0, 1)$. 
boxmuller <- function(nsamp) { n <- ceiling(nsamp / 2) u <- runif(n) v <- runif(n) r <- sqrt(-2 * log(u)) theta <- 2 * pi * v x <- r * cos(theta) y <- r * sin(theta) samp <- c(x, y) samp[seq_len(nsamp)] } #hist(c(x, y)) n <- 1000 normdata1 <- data.frame(x = boxmuller(n)) plt <- ggplot(normdata1, aes(x=x)) + geom_histogram(binwidth=0.25, fill="white", color="black") + geom_density(aes(y=0.25 * ..count..), color="purple") print(plt) # #### Marsaglia # # * From the Box-Muller transformation, we learned that if $R^2 \sim \text{Exp}(1/2)$ and $\Theta\sim\text{unif}(0, 2\pi)$ and they are independent, $(R\cos\Theta, R\sin\Theta)$ are i.i.d. standard normal. # # * On the other hand, if a random point $(U, V)$ is uniformly distributed on the unit disk, and let $(U, V) = (T\cos\Phi, T\sin\Phi)$, then # $$ # P(T^2 \le t^2, 0 \le \Phi \le \phi) = # \frac{t^2\phi}{2\pi}, \quad t^2 \le 1, ~ \phi \in (0, 2\pi) \\ # $$ # which means that $T^2 = U^2 + V^2 \sim \text{unif}(0, 1)$, $\Phi \sim \text{unif}(0, 2\pi)$, and they are independent. # # * Therefore, if we sample $(U, V)$ uniformly from the unit disk, and set $T = \sqrt{U^2 + V^2}$, $\cos\Phi = U/T$, $\sin\Phi=V/T$, then $T^2\sim\text{unif}(0,1)$ and $-2\log T^2$ is identically distributed to $R^2$, i.e., $\text{Exp}(1/2)$, and $\Phi$ is identically distributed to $\Theta$, i.e., uniform on $(0, 2\pi)$. Therefore, if we set $(X, Y)$ such that # $$ # X = \sqrt{-2\log T^2}\frac{U}{T}, # \quad # Y = \sqrt{-2\log T^2}\frac{V}{T}, # $$ # then $X, Y$ are i.i.d. standard normal. # # * One way to sample from the unit disk is to sample $(U, V)$ from $\text{unif}[-1, 1]\times\text{unif}[-1, 1]$, and discard the sample if $U^2 + V^2 > 1$ and resample (see acceptance-rejection sampling below). # # * Algorithm: # 1. $U, V \stackrel{iid}{\sim} \text{unif}[-1, 1]$; # 2. If $T = \sqrt{U^2 + V^2} > 1$, go to 1; # 3. Set $X = \sqrt{-2\log T^2}\frac{U}{T}$ and $Y = \sqrt{-2\log T^2}\frac{V}{T}$. # # * This method avoids the trigonometric function evaluations of the Box-Muller, but uses $4/\pi$ as many random pairs on average. marsaglia <- function(nsamp) { n <- ceiling(nsamp / 2) it <- 0 x <- numeric(n) y <- numeric(n) while (it < n) { u <- 2 * runif(1) - 1 v <- 2 * runif(1) - 1 tau <- sqrt(u^2 + v^2) if (tau > 1) next x[it] <- sqrt(-4 * log(tau)) * u / tau y[it] <- sqrt(-4 * log(tau)) * v / tau it <- it + 1 } samp <- c(x, y) samp[seq_len(nsamp)] } n <- 1000 normdata2 <- data.frame(x = marsaglia(n)) plt <- ggplot(normdata2, aes(x=x)) + geom_histogram(binwidth=0.25, fill="white", color="black") + geom_density(aes(y=0.25 * ..count..), color="purple") print(plt) # ## Random numbers by definition # # ### Bernoulli # # 1. Set the success probability $p$ # 2. Generate $U \sim \text{unif}(0, 1)$. # 3. Let $X = \mathbb{I}_{\{U \le p\}}$. # 4. Then $X \sim \text{Ber}(p)$. # # ### Binomial # # 1. Set the success probability $p$ # 2. Generate $n$ independent $X_i \sim \text{Ber}(p)$. # 3. Let $X_n = \sum_{i=1}^n X_i$. # 4. Then $X_n \sim B(n, p)$. # # ### Negative binomial # # 1. Set the success probability $p$ # 2. Generate $r$ independent $X_i \sim \text{Geom}(p)$. # 3. Let $X_r = \sum_{i=1}^r X_i$. # 4. Then $X_r \sim \text{NegBin}(r, p)$. # # ### Poisson # # 1. Generate $U_i \stackrel{iid}{\sim} \text{unif}(0, 1)$. # 2. Find $N$ such that $\prod_{i=1}^N U_i \ge e^{-\lambda} > \prod_{i=1}^{N+1} U_i$. # 3. Then $N \sim \text{Poi}(\lambda)$. 
# # - This is because $T_i = -\frac{1}{\lambda}\log U_i \stackrel{iid}{\sim} \text{Exp}(\lambda)$, # and is the waiting time between the $i-1$st and the $i$th event of the Poisson counting process $N(t) \sim \text{Poi}(\lambda t)$. # - That is, $N \sim \text{Poi}(\lambda) \iff T_1 + \dotsc + T_N \le 1 < T_1 + \dotsc + T_N + T_{N+1}$. ## Binomial random number generation genbin <- function(n, p) { u <- runif(n) x <- sum(u < p) } n <- 10; p <- 0.6 nsamp <- 1000 x <- numeric(nsamp) for (i in seq_len(nsamp)) { x[i] <- genbin(n, p) } bindata <- data.frame(x) plt <- ggplot(bindata, aes(x=x)) + geom_histogram(binwidth=0.25) print(plt) # Negative binomial random number generation gengeom <- function(p, nsamp=1) { u <- runif(nsamp) y <- log(u) / log(1 - p) ceiling(y) } nsamp <- 1000 p <- 0.6 r <- 5 x <- numeric(nsamp) for (i in seq_len(r)) { x <- x + gengeom(p, nsamp) } negbindata <- data.frame(x) plt <- ggplot(negbindata, aes(x=x)) + geom_histogram(binwidth=0.25) print(plt) # Poisson random number generation genpoi <- function(lam, maxiter=1000) { u_cum <- 1.0 k <- 0 while (u_cum > exp(-lam) && k < maxiter ) { u <- runif(1) u_cum <- u_cum * u k <- k + 1 } k } lam <- 3 # Poisson rate nsamp <- 1000 x <- numeric(nsamp) for (i in seq_len(nsamp)) { x[i] <- genpoi(lam) } poidata <- data.frame(x) plt <- ggplot(poidata, aes(x=x)) + geom_histogram(binwidth=0.25) print(plt) # ### Chi-square # # 1. Generate $Z_1, \dotsc, Z_{\nu} \stackrel{iid}{\sim} N(0, 1)$. # 2. Let $X_{\nu} = \sum_{i=1}^{\nu} Z_i^2$. # 3. Then $X_{\nu} \sim \chi^2(\nu)$. # # Alternatively, for even $\nu$: # # 1. Generate $U_i \stackrel{iid}{\sim} \text{unif}(0, 1)$. # 2. Let $X_{\nu} = -2\log(\prod_{i=1}^{\nu/2} U_i)$. # 3. Then $X_{\nu} \sim \chi^2(\nu)$. # # This is because $\chi^2(\nu) = \text{Gamma}(\nu/2, 2)$, where 2 is the scale parameter. # # ### Student's $t$ # # 1. Generate $Z \sim N(0, 1)$ and $X \sim \chi^2(\nu)$ independently. # 2. Let $T = Z / \sqrt{X/\nu}$. # 3. Then $T \sim t(\nu)$. # # ### $F$ # # 1. Generate $X_1 \sim \chi^2(\nu_1)$ and $X_2 \sim \chi^2(\nu_2)$ independently. # 2. Let $$F = \frac{X_1/\nu_1}{X_2/\nu_2}.$$ # 3. The $F \sim F(\nu_1, \nu_2)$. 
## chi-square random number generation genchisq1 <- function(nsamp, nu) { z <- matrix(rnorm(nsamp * nu), nrow=nsamp) rowSums(z^2) } nu <- 6 n <- 1000 chisqdata1 <- data.frame(x = genchisq1(n, nu)) plt <- ggplot(chisqdata1, aes(x=x)) + geom_histogram(binwidth=0.25, fill="white", color="black") + geom_density(aes(y=0.25 * ..count..), color="purple") print(plt) ## chi-square random number generation 2 genchisq2 <- function(nsamp, nu) { u <- matrix(runif(nsamp * nu / 2), nrow=nsamp) -2 * log(apply(u, 1, prod) ) } nu <- 6 n <- 1000 chisqdata2 <- data.frame(x = genchisq2(n, nu)) plt <- ggplot(chisqdata2, aes(x=x)) + geom_histogram(binwidth=0.25, fill="white", color="black") + geom_density(aes(y=0.25 * ..count..), color="purple") print(plt) ## Student's t random number generation gent <- function(nsamp, nu) { z <- rnorm(nsamp) chisq <- genchisq1(nsamp, nu) trv <- z / sqrt(chisq / nu) } nu <- 6 n <- 1000 tdata <- data.frame(x = gent(n, nu)) plt <- ggplot(tdata, aes(x=x)) + geom_histogram(binwidth=0.25, fill="white", color="black") + geom_density(aes(y=0.25 * ..count..), color="purple") print(plt) # F random number generation genF <- function(nsamp, nu1, nu2) { chisq1 <- genchisq1(nsamp, nu1) chisq2 <- genchisq1(nsamp, nu2) Frv <- chisq1 / nu1 / chisq2 * nu2 } nu1 <- 10; nu2 <- 6 n <- 1000 Fdata <- data.frame(x = genF(n, nu1, nu2)) plt <- ggplot(Fdata, aes(x=x)) + geom_histogram(binwidth=0.25, fill="white", color="black") + geom_density(aes(y=0.25 * ..count..), color="purple") print(plt) # ## Acceptance-rejection sampling (or just rejection sampling) # # Suppose we want to sample from a distribution with complicated density $f(x)$. It is not easy to use the above method since neither the cdf nor its inverse is analytically available. # # John von Neumann, while working on the Manhattan Project, suggested the following: # # 1. Find the "envelope" of $f$, i.e., a simple density $g(x)$ such that # $$ # f(x) \le c g(x) =: h(x) # $$ # for all $x$, for some constant $c > 0$. # # 2. Sample a random variable $X$ distributed according to $g$. # # 3. Accept $X$ as a representative of $f$ if # - $U \le \displaystyle\frac{f(X)}{h(X)}$, # - where $U$ is a uniform random variable on $[0, 1]$ drawn independently. # # 4. Reject $X$ otherwise. Go to Line 2. # ### Why AR sampling works? # # **Definition** (Body of a function). For a nonnegative, integrable function $f$, its body is defined and denoted by # $$ # B_f = \{(x, y): 0 \le y \le f(x) \}. # $$ # # - Thus the volume of $B_f$ is $|B_f| = \int_{-\infty}^{\infty}f(x)dx = \int_{-\infty}^{\infty}\int_0^{f(x)}1dydx$. # # **Theorem 1**. Suppose random variable $X$ has density $g$, $U\sim\text{unif}[0, 1]$ is independent of $X$, and there exists $c > 0$ such that $f(x) \le c g(x)$ for all $x$. Then, the random vector $(X, cg(X)U)$ is uniformly distributed over the body $B_{cg}$ of $cg$. # - Theorem 1 states that the AR sampling scheme uniformly samples from $B_{cg}=B_h$. # - The sample is accepted if and only if $0 \le h(X)U = cg(X)U = Y \le f(X)$, i.e., $(X, Y) \in B_f$. The conditional density of $(X, Y)$ given $\{(X, Y) \in B_f\}$ is merely # $$ # \frac{|B_{cg}|^{-1}\mathbb{I}_{B_{cg}}(x,y)}{P(B_f)}\mathbb{I}_{B_f}(x,y) # = \frac{|B_{cg}|^{-1}\mathbb{I}_{B_{cg}}(x,y)}{|B_f|/|B_{cg}|}\mathbb{I}_{B_f}(x,y) # = \frac{1}{|B_f|}\mathbb{I}_{B_f}(x,y) # . # $$ # That is, $(X,Y)|\{(X, Y)\in B_f\} \sim \text{unif}(B_f)$. # + This means the accepted sample $(X, Y)$ is drawn according to $\text{unif}(B_f)$. # - Now the marginal density of $X$ is $f$.
To see this, note when $(X, Y) \sim \text{unif}(B_f)$, # \begin{align*} # P(X \in A) &= \frac{|B_{\bar{A}}|}{|B_f|}, # \quad # \bar{A} = B_f \cap \{(x, y): x\in A\} = \{(x, y): x \in A, ~ 0 \le y \le f(x)\}. # \\ # &= \frac{\int_A f(x)dx}{1}, \quad \because |B_f| = \int_{-\infty}^{\infty}f(x)dx = 1 # \\ # &= \int_A f(x)dx. # \end{align*} # # - Thus sample $X$ is drawn according to $f$! # # - The total acceptance ratio is # \begin{align*} # P\left(U \le \frac{f(X)}{cg(X)}\right) # &= \int_{-\infty}^{\infty}\int_{0}^{f(x)/[cg(x)]} 1\cdot g(x) du dx \\ # &= \int_{-\infty}^{\infty} g(x) \int_{0}^{f(x)/[cg(x)]} 1 du dx \\ # &= \int_{-\infty}^{\infty} g(x) \frac{f(x)}{cg(x)} dx \\ # &= \int_{-\infty}^{\infty} \frac{f(x)}{c}dx \\ # &= \frac{1}{c} \int_{-\infty}^{\infty} f(x) dx \\ # &= \frac{1}{c} # . # \end{align*} # #### Proof of Theorem 1 # # We want to show that the joint density of $(X, cg(X)U)$ is # $$\frac{1}{|B_{cg}|}\mathbb{I}_{B_{cg}}(x, y).$$ # # Let $Y = cg(X)U$. Then $Y|\{X=x\} = cg(x) U \sim \text{unif}[0, cg(x)]$. That is, the conditional density of $Y$ given $X$ is # $$ # p_{Y|X}(y|x) = \frac{1}{cg(x)}\mathbb{I}_{[0, cg(x)]}(y). # $$ # By construction, the marginal density of $X$ is given by $p_X(x) = g(x)$. Therefore the joint density of $(X, Y)$ is # \begin{align*} # p_{XY}(x, y) &= p_X(x)p_{Y|X}(y|x) = \frac{1}{c}\mathbb{I}_{[0, cg(x)]}(y) \\ # &= \begin{cases} 1/c, & \text{if } 0 \le y \le c g(x), \\ # 0, & \text{otherwise}. \end{cases} # \\ # &= \frac{1}{c}\mathbb{I}_{B_{cg}}(x,y). # \end{align*} # Now since # $$ # 1 = \int_{-\infty}^{\infty}\int_0^{cg(x)}\frac{1}{c}dydx = \frac{1}{c}|B_{cg}|, # $$ # we have # $\frac{1}{|B_{cg}|}\mathbb{I}_{B_{cg}}(x, y)$ # as desired. # ### Example: Marsaglia # # > One way to sample from the unit disk is to sample $(U, V)$ from $\text{unif}[-1, 1]\times\text{unif}[-1, 1]$, and discard the sample if $U^2 + V^2 > 1$ and resample. # # We have $X=(U, V)$. # # * Target density: $f(u, v) = \frac{1}{\pi}\mathbb{I}_{\{u^2+v^2<1\}}(u, v)$ (uniform from unit disc) # * Sampling density: $g(u, v) = \frac{1}{4}\mathbb{I}_{\{|u|<1, |v|<1\}}(u, v)$ (uniform from $[-1, 1]^2$) # * Envelope: $h(u, v) = \frac{1}{\pi}\mathbb{I}_{|u|<1, |v|<1}(u, v) = \frac{4}{\pi}g(u, v)$, hence $c=4/\pi$. # * Accptance criterion # $$ # \frac{f(U,V)}{h(U,V)} = \frac{\mathbb{I}_{\{U^2+V^2 < 1\}}}{\mathbb{I}_{\{|U| < 1, |V| < 1\}}} # = \begin{cases} 1, & \text{if } U^2 + V^2 < 1 \\ 0, & \text{otherwise} \end{cases} # $$ # * Thus we accept the sample from $g$ iff $U^2 + V^2 < 1$. # ### Example: gamma random numbers # # Recall that the Gamma distribution with shape parameter $\alpha$ and scale parameter $\beta$ has density # $$ # f_{\Gamma}(x; \alpha, \beta) = \frac{1}{\Gamma(\alpha)\beta^{\alpha}}x^{\alpha-1}e^{-x/\beta} # $$ # for $x \ge 0$. If $X \sim \text{Gamma}(\alpha, \beta)$, then $cX \sim \text{Gamma}(\alpha, c\beta)$. # Hence it suffices to sample from $\text{Gamma}(\alpha, 1)$. Furthermore, $X\sim \text{Gamma}(\alpha, 1)$ and $\alpha > 1$, then $X \stackrel{d}{=} Y + Z$ where $Y \sim \text{Gamma}(\lfloor \alpha \rfloor, 1)$, $Z \sim \text{Gamma}(\alpha - \lfloor \alpha \rfloor, 1)$ and independent of $Y$. # The $Y$ can be generated by summing $\lfloor \alpha \rfloor$ independent $\text{Exp}(1)$ random variables. # Therefore we only need to sample from # $\text{Gamma}(\alpha, 1)$, $\alpha \in (0, 1)$. # # If $0 < \alpha < 1$, we see that # $$ # x^{\alpha - 1}e^{-x} \le \begin{cases} x^{\alpha - 1}, & \text{if } 0 \le x \le 1, \\ e^{-x}, & \text{otherwise}. 
\end{cases} # $$ # Thus we choose # $$ # h(x) = \begin{cases} x^{\alpha - 1}/\Gamma(\alpha), & \text{if } 0 \le x \le 1, \\ e^{-x}/\Gamma(\alpha), & \text{otherwise}. \end{cases} # $$ # leading to # $$ # g(x) = \begin{cases} x^{\alpha - 1}/(1/\alpha + 1/e), & \text{if } 0 \le x \le 1, \\ e^{-x}/(1/\alpha + 1/e), & \text{otherwise}. \end{cases} # $$ # and # $$ # c = \frac{1}{\Gamma(\alpha)}\left(\frac{1}{\alpha} + \frac{1}{e}\right). # $$ # Density $g$ has cdf # $$ # G(x) = \begin{cases} x^{\alpha}/(1 + \alpha/e), & \text{if } 0 \le x \le 1, \\ # \frac{1 + \alpha/e - \alpha e^{-x}}{1 + \alpha/e}, & \text{otherwise}. \end{cases} # $$ # whose inverse is # $$ # G^{-1}(u) = \begin{cases} [(1 + \alpha/e)u]^{1/\alpha}, & \text{if } 0 \le u \le 1/[1+\alpha/e], \\ # -\log(1/\alpha + 1/e) - \log(1 - u), & 1/[1+\alpha/e] \le u < 1. \end{cases} # $$ # # # + gengamma_ar <- function(nsamp, alph) { # sample X from g v <- runif(nsamp) # unif rv for inverse method idx <- v > 1 / (1 + alph * exp(-1)) x <- numeric(nsamp) x[idx] = -log(1 / alph + exp(-1)) - log(1 - v[idx]) x[!idx] = ((1 + alph * exp(-1)) * v[!idx])^(1 / alph) # test acceptance u <- runif(nsamp) idx2 <- (x > 1) accept <- logical(nsamp) accept[idx2] <- (u[idx2] < x[idx2]^(alph - 1)) accept[!idx2] <- (u[!idx2] < exp(-x[!idx2])) x[accept] } n <- 2000 alph <- 0.5 x <- gengamma_ar(n, alph) length(x) length(x) / n # - gamma(0.5) / (1 / alph + exp(-1) ) # acceptance ratio gamdata <- data.frame(x = x) plt <- ggplot(gamdata, aes(x=x)) + geom_histogram(binwidth=0.25, fill="white", color="black") + geom_density(aes(y=0.25 * ..count..), color="purple") print(plt) # ## Multivariate random numbers # # ### Batch methods # # #### Multinomial # # Suppose we want to sample $(X_1, \dotsc, X_k) \sim \text{mult}(n, p_1, \dotsc, p_k)$, where $\sum_{i=1}^k p_i = 1$, $\sum_{i=1}^n X_i = n$. # # * Method 1: draw $n$ independent realization from pmf $(p_1, \dotsc, p_k)$. # + An easy way is to declare category $j$, $j \in \{1, \dotsc, k\}$ if # $$ # U \in \left[\sum_{k=0}^{j-1} p_k, \sum_{k=0}^{j} p_k \right), # $$ # where $U \sim \text{unif}[0, 1]$. Take $p_0 = 0$. # # * Method 2: sample $k$ independent Poisson random variables $(X_1, \dotsc, X_k)$ with means $\lambda p_1, \dotsc, \lambda p_k$. If the total number of successes $\sum_{i=1}^k X_i$ is equal to $n$, then the conditional distribution of $(X_1, \dotsc, X_k)$ is the desired multinomial. # + We must be fortunate for the sum to be exactly $n$. # # #### Multivariate normal # # Suppose we want to sample $\mathbf{X} = (X_1, \dotsc, X_p)$ from MVN $N(\boldsymbol{\mu}, \boldsymbol{\Sigma})$. If we can find $\mathbf{L} \in \mathbb{R}^{p \times p}$ such that $\mathbf{L}^T\mathbf{L} = \boldsymbol{\Sigma}$, then # $$ # \mathbf{L}^T\mathbf{Z} + \boldsymbol{\mu}, \quad \mathbf{Z} \sim N(0, \mathbf{I}_p) # $$ # follows the desired distribution. # # * Random vector $\mathbf{Z}$ consists of $p$ independent standard normal random numbers. # # * Possible choices of $\mathbf{L}$ are: # # + Cholesky decomposition of $\boldsymbol{\Sigma}$: $\mathbf{L}$ is lower triangular. # + Matrix square root: if $\mathbf{Q}\boldsymbol{\Lambda}\mathbf{Q}^T$ is a eigenvalue decomposition of $\boldsymbol{\Sigma}$, where $\boldsymbol{\Lambda} = \text{diag}(\lambda_1, \dotsc, \lambda_p)$ with $\lambda_i \ge 0$, then # $$ # \mathbf{L} = \mathbf{Q}\boldsymbol{\Lambda}^{1/2}\mathbf{Q}^T, # \quad # \boldsymbol{\Lambda}^{1/2} = \text{diag}(\lambda_1^{1/2}, \dotsc, \lambda_p^{1/2}) # . 
# $$ # such a matrix is symmetric and positive semidefinite, and is often denoted by $\boldsymbol{\Sigma}^{1/2}$. # # * For large $p$, finding such a decomposition is challenging. # # #### Multivariate $t$ # # A multivariate $t$ distribution with degrees of freedom $\nu$, scale matrix $\boldsymbol{\Sigma}$, and location vector $\boldsymbol{\mu}$ is the distribution of the random vector # $$ # \mathbf{T} = \frac{1}{\sqrt{\chi^2_{\nu}/\nu}}\mathbf{X} + \boldsymbol{\mu}, # $$ # where $\mathbf{X} \sim N(0, \boldsymbol{\Sigma})$, and $\chi^2_{\nu}$ is the chi-square random variable with $\nu$ degrees of freedom, independent of $\mathbf{X}$. # ### Sequential sampling # # In many cases we can sample a random vector by sampling each component in turn and conditioning: # \begin{align*} # p_{X_1, \dotsc, X_p}(x_1, \dotsc, x_p) # &= p_{X_1}(x_1)\prod_{j=2}^p p_{X_j|X_1, \dotsc, X_{j-1}}(x_j | x_1, \dotsc, x_{j-1}) \\ # &= p_{X_1}(x_1)p_{X_2|X_1}(x_2|x_1) \prod_{j=3}^p p_{X_j|X_2, \dotsc, X_{j-1}}(x_j | x_1, \dotsc, x_{j-1}) \\ # &= \dotsb # \end{align*} # # #### Multinomial # # For $(X_1, \dotsc, X_k) \sim \text{mult}(n, p_1, \dotsc, p_k)$, it is immediate to see that $X_1 \sim B(n, p_1)$. Given $X_1 = x_1$, $(X_2, \dotsc, X_k) \sim \text{mult}(n - x_1, p_2 / (1 - p_1), \dotsc, p_k / (1 - p_1) )$. Hence $X_2 | \{X_1 = x_1\} \sim B(n - x_1, p_2 / (1 - p_1) )$ and so on. # # #### Multivariate normal # # If we want to sample $\mathbf{X} = (X_1, \dotsc, X_p)$ from MVN with mean $\boldsymbol{\mu}=(\mu_1, \dotsc, \mu_p)^T$ and covariance matrix $\boldsymbol{\Sigma} = (\sigma_{ij})$, then note that the first component $X_1 \sim N(\mu_1, \sigma_{11})$. From the conditional distribution formula for multivariate normal, we see that # $$ # (X_2, \dotsc, X_p) | \{X_1 = x_1\} # \sim N(\bar{\boldsymbol{\mu}}, \bar{\boldsymbol{\Sigma}}), # \quad # \bar{\boldsymbol{\mu}} = \boldsymbol{\mu}_2 + \boldsymbol{\Sigma}_{12}^T(x_1 - \mu_1)/\sigma_{11}, # ~ # \bar{\boldsymbol{\Sigma}} = \boldsymbol{\Sigma}_{22} - \boldsymbol{\Sigma}_{12}^T\boldsymbol{\Sigma}_{12}/\sigma_{11} # $$ # if we partition # \begin{align*} # \boldsymbol{\mu} &= (\mu_1, \boldsymbol{\mu}_2)^T \\ # \boldsymbol{\Sigma} &= \begin{bmatrix} # \sigma_{11} & \boldsymbol{\Sigma}_{12} \\ # \boldsymbol{\Sigma}_{12}^T & \boldsymbol{\Sigma}_{22} # \end{bmatrix} # . # \end{align*}
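# The batch recipe above is easy to check numerically. The sketch below uses NumPy rather than
# the notebook's R, purely for illustration: draw iid N(0, 1) entries, map them through a
# Cholesky factor of Sigma, and compare the sample moments with mu and Sigma.
# +
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[2.0, 0.6, 0.3],
                  [0.6, 1.0, 0.2],
                  [0.3, 0.2, 0.5]])

L = np.linalg.cholesky(Sigma)            # NumPy returns the lower factor: L @ L.T == Sigma
Z = rng.standard_normal((100000, 3))     # rows of iid N(0, 1) entries
X = mu + Z @ L.T                         # each row ~ N(mu, Sigma)

print(np.round(X.mean(axis=0), 2))                             # close to mu
print(np.allclose(np.cov(X, rowvar=False), Sigma, atol=0.05))  # True up to Monte Carlo error
# -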
29,615
/Predicting Salary(beginner).ipynb
2089d1e2d776198e365b2a347363a2865ecf930a
[]
no_license
OhmVikrant/Prediciting-salary
https://github.com/OhmVikrant/Prediciting-salary
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
22,389
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Numerical Gradient Checking # We would highly recommend looking at `neural_networks.grad_check.check_gradients` and making sure you understand how numerical gradient checking is being carried out. This function is used in the notebook to check the gradients of the neural network layers you write. Make sure to check the gradient of a layer after finishing its implementation. # # The function returns the relative error of the numerical gradient (approximated using finite differences) with respect to the analytical gradient (computed via backpropagation). Correct implementations should get very small errors, usually less than `1e-8` for 64-bit float matrices (the default). # + # %load_ext autoreload # %autoreload 2 import numpy as np from neural_networks.utils.grad_check import check_gradients from neural_networks.layers import FullyConnected, Elman, Conv2D from neural_networks.activations import Linear, Sigmoid, TanH, ReLU, SoftMax # - # ## Gradient Checks for Activation Functions # ### Linear Activation # + X = np.random.randn(2, 3) dLdY = np.random.randn(2, 3) # initialize a fully connected layer # and perform a forward and backward pass linear_activation = Linear() _ = linear_activation.forward(X) grad = linear_activation.backward(X, dLdY) # check the gradients w.r.t. each parameter print( f"Relative error for linear activation:", check_gradients( fn=linear_activation.forward, # the function we are checking grad=grad, # the analytically computed gradient x=X, # the variable w.r.t. which we are taking the gradient dLdf=dLdY, # gradient at previous layer ) ) # - # ### Sigmoid Activation # + X = np.random.randn(2, 3) dLdY = np.random.randn(2, 3) # initialize a fully connected layer # and perform a forward and backward pass sigmoid_activation = Sigmoid() _ = sigmoid_activation.forward(X) grad = sigmoid_activation.backward(X, dLdY) # check the gradients w.r.t. each parameter print( f"Relative error for sigmoid activation:", check_gradients( fn=sigmoid_activation.forward, # the function we are checking grad=grad, # the analytically computed gradient x=X, # the variable w.r.t. which we are taking the gradient dLdf=dLdY, # gradient at previous layer ) ) # - # ### Tanh Activation # + X = np.random.randn(2, 3) dLdY = np.random.randn(2, 3) # initialize a fully connected layer # and perform a forward and backward pass tanh_activation = TanH() _ = tanh_activation.forward(X) grad = tanh_activation.backward(X, dLdY) # check the gradients w.r.t. each parameter print( f"Relative error for tanh activation:", check_gradients( fn=tanh_activation.forward, # the function we are checking grad=grad, # the analytically computed gradient x=X, # the variable w.r.t. which we are taking the gradient dLdf=dLdY, # gradient at previous layer ) ) # - # ### ReLU Activation # + X = np.random.randn(2, 3) dLdY = np.random.randn(2, 3) # initialize a fully connected layer # and perform a forward and backward pass relu_activation = ReLU() out = relu_activation.forward(X) grad = relu_activation.backward(X, dLdY) # check the gradients w.r.t. each parameter print( f"Relative error for relu activation:", check_gradients( fn=relu_activation.forward, # the function we are checking grad=grad, # the analytically computed gradient x=X, # the variable w.r.t. 
which we are taking the gradient dLdf=dLdY, # gradient at previous layer ) ) # - # ### Softmax Activation # + X = np.random.randn(2, 3) dLdY = np.random.randn(2, 3) # initialize a fully connected layer # and perform a forward and backward pass softmax_activation = SoftMax() _ = softmax_activation.forward(X) grad = softmax_activation.backward(X, dLdY) # check the gradients w.r.t. each parameter print( f"Relative error for softmax activation:", check_gradients( fn=softmax_activation.forward, # the function we are checking grad=grad, # the analytically computed gradient x=X, # the variable w.r.t. which we are taking the gradient dLdf=dLdY, # gradient at previous layer ) ) # - # ### Cross Entropy # + from neural_networks.losses import CrossEntropy num_pts = 5 num_classes = 6 # one-hot encoded y y_idxs = np.random.randint(0, num_classes, (num_pts,)) y = np.zeros((num_pts, num_classes)) y[range(num_pts), y_idxs] = 1 # normalized predictions scores = np.random.uniform(0, 1, size=(num_pts, num_classes)) y_hat = scores / scores.sum(axis=1, keepdims=True) cross_entropy_loss = CrossEntropy("cross_entropy") def forward_fn(Y, Y_hat): def inner_forward(Y_hat): return cross_entropy_loss.forward(Y, Y_hat) return inner_forward loss = cross_entropy_loss.forward(y, y_hat) grad = cross_entropy_loss.backward(y, y_hat) print( f"Relative error for cross entropy loss:", check_gradients( fn=forward_fn(y, y_hat), # the function we are checking grad=grad, # the analytically computed gradient x=y_hat, # the variable w.r.t. which we are taking the gradient dLdf=1, # gradient at previous layer ) ) # - # ## Gradient Checks for Full Layers (Linear Activations) # ### Fully Connected Layer # + X = np.random.randn(2, 3) dLdY = np.random.randn(2, 4) # initialize a fully connected layer # and perform a forward and backward pass fc_layer = FullyConnected(n_out=4, activation="relu") _ = fc_layer.forward(X) _ = fc_layer.backward(dLdY) # check the gradients w.r.t. each parameter for param in fc_layer.parameters: print( f"Relative error for {param}:", check_gradients( fn=fc_layer.forward_with_param(param, X), # the function we are checking grad=fc_layer.gradients[param], # the analytically computed gradient x=fc_layer.parameters[param], # the variable w.r.t. which we are taking the gradient dLdf=dLdY, # gradient at previous layer ) ) # - # ### Elman Recurrent Layer # + X = np.random.randn(2, 3, 4) dLdY = np.random.randn(2, 5) # initialize a recurrent layer # and perform a forward and backward pass elman_layer = Elman(n_out=5, activation="linear") _ = elman_layer.forward(X) _ = elman_layer.backward(dLdY) # check the gradients w.r.t. each parameter for param in elman_layer.parameters: # check the gradient print( f"Relative error for {param}:", check_gradients( fn=elman_layer.forward_with_param(param, X), # the function we are checking grad=elman_layer.gradients[param], # the analytically computed gradient x=elman_layer.parameters[param], # the variable w.r.t. which we are taking the gradient dLdf=dLdY, # gradient at previous layer ) ) # - # ### Conv Layer # + X = np.random.randn(2, 5, 6, 7) dLdY = np.random.randn(2, 5, 6, 4) # initialize a fully connected layer # and perform a forward and backward pass conv_layer = Conv2D( n_out=4, kernel_shape=(3, 3), activation="linear", weight_init="uniform", pad="same", ) _ = conv_layer.forward(X) _ = conv_layer.backward(dLdY) # check the gradients w.r.t. 
each parameter for param in conv_layer.parameters: print( f"Relative error for {param}:", check_gradients( fn=conv_layer.forward_with_param(param, X), # the function we are checking grad=conv_layer.gradients[param], # the analytically computed gradient x=conv_layer.parameters[param], # the variable w.r.t. which we are taking the gradient dLdf=dLdY, # gradient at previous layer ) ) # + from neural_networks.losses import CrossEntropy num_pts = 5 num_classes = 6 # one-hot encoded y y_idxs = np.random.randint(0, num_classes, (num_pts,)) y = np.zeros((num_pts, num_classes)) y[range(num_pts), y_idxs] = 1 # normalized predictions scores = np.random.uniform(0, 1, size=(num_pts, num_classes)) y_hat = scores / scores.sum(axis=1, keepdims=True) cross_entropy_loss = CrossEntropy("cross_entropy") def forward_fn(Y, Y_hat): def inner_forward(Y_hat): return cross_entropy_loss.forward(Y, Y_hat) return inner_forward loss = cross_entropy_loss.forward(y, y_hat) grad = cross_entropy_loss.backward(y, y_hat) print( f"Relative error for cross entropy loss:", check_gradients( fn=forward_fn(y, y_hat), # the function we are checking grad=grad, # the analytically computed gradient x=y_hat, # the variable w.r.t. which we are taking the gradient dLdf=1, # gradient at previous layer ) ) # -
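# The layer checks above all go through `neural_networks.utils.grad_check.check_gradients`. As a
# reference point, a minimal standalone sketch of the same idea is given below: a central
# finite-difference estimate of the gradient compared against the analytical one via a relative
# error. This is only an illustrative approximation written for this note, not the course's actual
# implementation; the real `check_gradients` signature and error norm may differ in detail.

# +
import numpy as np

def numerical_grad_check(fn, grad, x, dLdf, eps=1e-7):
    """Relative error between `grad` and a central-difference estimate of dL/dx.

    fn   : callable mapping x -> f(x), where f(x) has the same shape as dLdf
    grad : analytically computed dL/dx (e.g. from a backward pass)
    x    : point at which to check; perturbed in place and restored
    dLdf : upstream gradient dL/df
    """
    num_grad = np.zeros_like(x)
    it = np.nditer(x, flags=["multi_index"])
    while not it.finished:
        idx = it.multi_index
        orig = x[idx]
        x[idx] = orig + eps
        f_plus = fn(x)
        x[idx] = orig - eps
        f_minus = fn(x)
        x[idx] = orig  # restore the perturbed entry
        # chain rule: dL/dx_i is approximately sum over outputs of dL/df * (df/dx_i)
        num_grad[idx] = np.sum((f_plus - f_minus) * dLdf) / (2 * eps)
        it.iternext()
    denom = np.linalg.norm(num_grad) + np.linalg.norm(grad)
    return np.linalg.norm(num_grad - grad) / denom

# For 64-bit float inputs a correct backward pass should give errors well below 1e-8,
# matching the threshold quoted at the top of this notebook.
# -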
9,040
/runs/GS24.L75/GS24.L75-MAA008/.ipynb_checkpoints/2019-06-21-AA-monitoring-NACHOS12.L75-MAA4001-GS24.L75-MAA008-checkpoint.ipynb
8192e3420db44d2c747e7423301b0944fdf33a73
[]
no_license
auraoupa/CMEMS-simus-regionales
https://github.com/auraoupa/CMEMS-simus-regionales
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
6,314,008
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + ## imports import sys import numpy as np import numpy.ma as ma import xarray as xr import matplotlib.pyplot as plt import cartopy.crs as ccrs import cartopy.feature as cfeature import cmocean # %matplotlib inline # - config1='NACHOS12.L75' case1 = 'MAA4001' config2='GS24.L75' case2 = 'MAA008' year = '2010' #to do on cal1 # # mkdir -p /PLOTS/NACHOS12.L75-${case1}-${case2}/python # transfer_mean_from_occigen.ksh # + def all_plots(case1,case2,year,**kwargs): dirmean1='/mnt/albert/equipes/IGE/meom/workdir/albert/NACHOS12.L75/NACHOS12.L75-'+case1+'-MEAN/1d/'+year+'/' dirmean2='/mnt/albert/equipes/IGE/meom/workdir/albert/NACHOS12.L75/NACHOS12.L75-'+case2+'-MEAN/1d/'+year+'/' title="NACHOS12.L75 "+case1+"-"+case2+" "+year gridfile='/mnt/albert/equipes/IGE/meom/workdir/albert/NACHOS12.L75/NACHOS12.L75-I/NACHOS12.L75-MAA4001_mesh_mask.nc' file1flxT=dirmean1+'NACHOS12.L75-'+case1+'_y'+year+'.1d_flxT.nc' file1T=dirmean1+'NACHOS12.L75-'+case1+'_y'+year+'.1d_gridT.nc' file1EKE=dirmean1+'NACHOS12.L75-'+case1+'_y'+year+'.1d_EKE.nc' file1MXL03=dirmean1+'NACHOS12.L75-'+case1+'_y'+year+'m03.1d_MXL.nc' file1MXL09=dirmean1+'NACHOS12.L75-'+case1+'_y'+year+'m09.1d_MXL.nc' # file1ICE03=dirmean1+'NACHOS12.L75-'+case1+'_y'+year+'m03.1d_icemod3.nc' # file1ICE09=dirmean1+'NACHOS12.L75-'+case1+'_y'+year+'m09.1d_icemod3.nc' file1PSI=dirmean1+'NACHOS12.L75-'+case1+'_y'+year+'.1d_PSI.nc' file2flxT=dirmean2+'NACHOS12.L75-'+case2+'_y'+year+'.1d_flxT.nc' file2T=dirmean2+'NACHOS12.L75-'+case2+'_y'+year+'.1d_gridT.nc' file2EKE=dirmean2+'NACHOS12.L75-'+case2+'_y'+year+'.1d_EKE.nc' file2MXL03=dirmean2+'NACHOS12.L75-'+case2+'_y'+year+'m03.1d_MXL.nc' file2MXL09=dirmean2+'NACHOS12.L75-'+case2+'_y'+year+'m09.1d_MXL.nc' # file2ICE03=dirmean2+'NACHOS12.L75-'+case2+'_y'+year+'m03.1d_icemod3.nc' # file2ICE09=dirmean2+'NACHOS12.L75-'+case2+'_y'+year+'m09.1d_icemod3.nc' file2PSI=dirmean2+'NACHOS12.L75-'+case2+'_y'+year+'.1d_PSI.nc' dsgrid=xr.open_dataset(gridfile) lat=dsgrid.nav_lat lon=dsgrid.nav_lon masksurf=dsgrid.tmaskutil[0] mask=dsgrid.tmask[0] ds1T=xr.open_dataset(file1T) tem1=ds1T.votemper[0] sal1=ds1T.vosaline[0] ssh1=ds1T.sossheig[0] tem1_ma=np.ma.array(tem1,mask=1-mask) sal1_ma=np.ma.array(sal1,mask=1-mask) ssh1_ma=np.ma.array(ssh1,mask=1-masksurf) ds2T=xr.open_dataset(file2T) tem2=ds2T.votemper[0] sal2=ds2T.vosaline[0] ssh2=ds2T.sossheig[0] tem2_ma=np.ma.array(tem2,mask=1-mask) sal2_ma=np.ma.array(sal2,mask=1-mask) ssh2_ma=np.ma.array(ssh2,mask=1-masksurf) ds1MXL03=xr.open_dataset(file1MXL03) mxl103_rho010=ds1MXL03.somxl010[0] mxl103_rho030=ds1MXL03.somxl030[0] mxl103_t02=ds1MXL03.somxlt02[0] ds1MXL09=xr.open_dataset(file1MXL09) mxl109_rho010=ds1MXL09.somxl010[0] mxl109_rho030=ds1MXL09.somxl030[0] mxl109_t02=ds1MXL09.somxlt02[0] mxl103_rho010_ma=np.ma.array(mxl103_rho010,mask=1-masksurf) mxl103_rho030_ma=np.ma.array(mxl103_rho030,mask=1-masksurf) mxl103_t02_ma=np.ma.array(mxl103_t02,mask=1-masksurf) mxl109_rho010_ma=np.ma.array(mxl109_rho010,mask=1-masksurf) mxl109_rho030_ma=np.ma.array(mxl109_rho030,mask=1-masksurf) mxl109_t02_ma=np.ma.array(mxl109_t02,mask=1-masksurf) ds2MXL03=xr.open_dataset(file2MXL03) mxl203_rho010=ds2MXL03.somxl010[0] mxl203_rho030=ds2MXL03.somxl030[0] mxl203_t02=ds2MXL03.somxlt02[0] ds2MXL09=xr.open_dataset(file2MXL09) mxl209_rho010=ds2MXL09.somxl010[0] 
mxl209_rho030=ds2MXL09.somxl030[0] mxl209_t02=ds2MXL09.somxlt02[0] mxl203_rho010_ma=np.ma.array(mxl203_rho010,mask=1-masksurf) mxl203_rho030_ma=np.ma.array(mxl203_rho030,mask=1-masksurf) mxl203_t02_ma=np.ma.array(mxl203_t02,mask=1-masksurf) mxl209_rho010_ma=np.ma.array(mxl209_rho010,mask=1-masksurf) mxl209_rho030_ma=np.ma.array(mxl209_rho030,mask=1-masksurf) mxl209_t02_ma=np.ma.array(mxl209_t02,mask=1-masksurf) ds1EKE=xr.open_dataset(file1EKE) eke1=ds1EKE.voeke[0,0] eke1_ma=np.ma.array(eke1,mask=1-masksurf) ds2EKE=xr.open_dataset(file2EKE) eke2=ds2EKE.voeke[0,0] eke2_ma=np.ma.array(eke2,mask=1-masksurf) ds1PSI=xr.open_dataset(file1PSI) psi1=ds1PSI.sobarstf[0] psi1_ma=np.ma.array(psi1,mask=1-masksurf) ds2PSI=xr.open_dataset(file2PSI) psi2=ds2PSI.sobarstf[0] psi2_ma=np.ma.array(psi2,mask=1-masksurf) ds1flxT=xr.open_dataset(file1flxT) Heat1=ds1flxT.sohefldo[0] WaterFlx1=ds1flxT.sowaflup[0] WaterDmp1=ds1flxT.sowafld[0] Heat1_ma=np.ma.array(Heat1,mask=1-masksurf) WaterFlx1_ma=np.ma.array(WaterFlx1,mask=1-masksurf) WaterDmp1_ma=np.ma.array(WaterDmp1,mask=1-masksurf) ds2flxT=xr.open_dataset(file2flxT) Heat2=ds2flxT.sohefldo[0] WaterFlx2=ds2flxT.sowaflup[0] WaterDmp2=ds2flxT.sowafld[0] Heat2_ma=np.ma.array(Heat2,mask=1-masksurf) WaterFlx2_ma=np.ma.array(WaterFlx2,mask=1-masksurf) WaterDmp2_ma=np.ma.array(WaterDmp2,mask=1-masksurf) # ds1ICE03=xr.open_dataset(file1ICE03) # iconc103=ds1ICE03.siconc[0] # ivolu103=ds1ICE03.sivolu[0] # ds1ICE09=xr.open_dataset(file1ICE09) # iconc109=ds1ICE09.siconc[0] # ivolu109=ds1ICE09.sivolu[0] # ds2ICE03=xr.open_dataset(file2ICE03) # iconc203=ds2ICE03.siconc[0] # ivolu203=ds2ICE03.sivolu[0] # ds2ICE09=xr.open_dataset(file2ICE09) # iconc209=ds2ICE09.siconc[0] # ivolu209=ds2ICE09.sivolu[0] def plot_glob(fig,sub,var,vmin,vmax,unit,name,pal): ax = fig.add_subplot(sub,projection=ccrs.Orthographic(central_longitude=-30, central_latitude=35)) cmap = plt.get_cmap(pal) cmap.set_under(color='grey') pcolor=ax.pcolormesh(lon,lat,ma.masked_invalid(var),transform=ccrs.PlateCarree(),cmap=cmap,vmin=vmin,vmax=vmax) ax.set_global() ax.add_feature(cfeature.LAND,facecolor='grey') ax.coastlines() cbar=plt.colorbar(pcolor,orientation='vertical',fraction=0.026,pad=0.1) cbar.ax.tick_params(labelsize=20) ax.set_title(name+' '+unit,size=17) def plot_glob_diff(fig,sub,var1,var2,vmin,vmax,unit,name,pal): ax = fig.add_subplot(sub,projection=ccrs.Orthographic(central_longitude=-30, central_latitude=35)) cmap = plt.get_cmap(pal) cmap.set_under(color='grey') pcolor=ax.pcolormesh(lon,lat,ma.masked_invalid(var1-var2),transform=ccrs.PlateCarree(),cmap=cmap,vmin=vmin,vmax=vmax) ax.set_global() ax.add_feature(cfeature.LAND,facecolor='grey') ax.coastlines() cbar=plt.colorbar(pcolor,orientation='vertical',fraction=0.026,pad=0.1) cbar.ax.tick_params(labelsize=20) ax.set_title(name+' '+unit,size=17) def plot_atl(fig,sub,var,vmin,vmax,unit,name,pal): ax = fig.add_subplot(sub,projection=ccrs.PlateCarree(central_longitude=-30)) cmap = plt.get_cmap(pal) ax.set_extent([-100, 50, 0, 70]) cmap.set_under(color='grey') pcolor=ax.pcolormesh(lon,lat,ma.masked_invalid(var),transform=ccrs.PlateCarree(),cmap=cmap,vmin=vmin,vmax=vmax) ax.add_feature(cfeature.LAND,facecolor='grey') ax.coastlines() ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True, linewidth=2, color='gray', alpha=0.5, linestyle='--') fig.subplots_adjust(right=0.8) ax.text(-0.07, 0.55, 'Latitude (in degree)', va='bottom', ha='center', rotation='vertical', rotation_mode='anchor', transform=ax.transAxes) ax.text(0.5, -0.2, 'Longitude (in degree)', 
va='bottom', ha='center', rotation='horizontal', rotation_mode='anchor', transform=ax.transAxes) cbar = plt.colorbar(pcolor,orientation='horizontal',shrink=0.75) ax.set_title(name+' '+unit,size=17,y=1.08) def plot_atl_cont(fig,sub,var,unit,name,vmin,vmax,pal): ax = fig.add_subplot(sub,projection=ccrs.PlateCarree(central_longitude=-30)) ax.set_extent([-100, 50, 0, 70]) cmap = plt.get_cmap(pal) cmap.set_under(color='grey') pcolor=ax.pcolormesh(lon,lat,ma.masked_invalid(var),transform=ccrs.PlateCarree(),cmap=cmap,vmin=vmin,vmax=vmax) pcont=ax.contour(lon,lat,ma.masked_invalid(var),10,colors='k',transform=ccrs.PlateCarree()) ax.add_feature(cfeature.LAND,facecolor='black') ax.coastlines() ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True, linewidth=2, color='grey', alpha=0.5, linestyle='--') fig.subplots_adjust(right=0.8) ax.text(-0.07, 0.55, 'Latitude (in degree)', va='bottom', ha='center', rotation='vertical', rotation_mode='anchor', transform=ax.transAxes) ax.text(0.5, -0.2, 'Longitude (in degree)', va='bottom', ha='center', rotation='horizontal', rotation_mode='anchor', transform=ax.transAxes) cbar = plt.colorbar(pcolor,orientation='horizontal',shrink=0.75) ax.set_title(name+' '+unit,size=17,y=1.08) def plot_natl(fig,sub,var,vmin,vmax,unit,name,pal): ax = fig.add_subplot(sub,projection=ccrs.PlateCarree(central_longitude=-30)) ax.set_extent([-100, 50, 50, 70]) cmap = plt.get_cmap(pal) cmap.set_under(color='grey') pcolor=ax.pcolormesh(lon,lat,ma.masked_invalid(var),transform=ccrs.PlateCarree(),cmap=cmap,vmin=vmin,vmax=vmax) ax.add_feature(cfeature.LAND,facecolor='grey') ax.coastlines() ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True, linewidth=2, color='gray', alpha=0.5, linestyle='--') fig.subplots_adjust(right=0.8) ax.text(-0.07, 0.55, 'Latitude (in degree)', va='bottom', ha='center', rotation='vertical', rotation_mode='anchor', transform=ax.transAxes) ax.text(0.5, -0.4, 'Longitude (in degree)', va='bottom', ha='center', rotation='horizontal', rotation_mode='anchor', transform=ax.transAxes) cbar = plt.colorbar(pcolor,orientation='horizontal',shrink=0.75) ax.set_title(name+' '+unit,size=17,y=1.19) # Tous les plots # Tous les plots glob # Eke, SSH,T et S fig = plt.figure(figsize=(22,7)) plot_glob(fig,131,10000*eke1_ma,0,2500,'',case1,cmocean.cm.amp) plot_glob(fig,132,10000*eke2_ma,0,2500,'',case2,cmocean.cm.amp) plot_glob(fig,133,10000*eke1_ma-10000*eke2_ma,-2500,2500,'',case1+'-'+case2,cmocean.cm.balance) fig.suptitle('NACHOS12.L75 Surf EKE 1e4m2s '+year, fontsize=25) fig = plt.figure(figsize=(22,7)) plot_glob(fig,131,tem1_ma[0],-2,30,'',case1,cmocean.cm.thermal) plot_glob(fig,132,tem2_ma[0],-2,30,'',case2,cmocean.cm.thermal) plot_glob(fig,133,tem1_ma[0]-tem2_ma[0],-2,2,'',case1+'-'+case2,cmocean.cm.balance) fig.suptitle('NACHOS12.L75 Surf Temperature deg C '+year, fontsize=25) fig = plt.figure(figsize=(22,7)) plot_glob(fig,131,sal1_ma[0],30,40,'',case1,cmocean.cm.haline) plot_glob(fig,132,sal2_ma[0],30,40,'',case2,cmocean.cm.haline) plot_glob(fig,133,sal1_ma[0]-sal2_ma[0],-1,1,'',case1+'-'+case2,cmocean.cm.balance) fig.suptitle('NACHOS12.L75 Surf Salinity PSU '+year, fontsize=25) #MXL fig = plt.figure(figsize=(22,7)) plot_glob(fig,131,mxl103_rho010_ma,0,1500,'',case1,cmocean.cm.tempo) plot_glob(fig,132,mxl203_rho010_ma,0,1500,'',case2,cmocean.cm.tempo) plot_glob(fig,133,mxl103_rho010_ma-mxl203_rho010_ma,-100,100,'',case1+'-'+case2,cmocean.cm.balance) fig.suptitle('NACHOS12.L75 March MXL rho010 m '+year, fontsize=25) fig = plt.figure(figsize=(22,7)) 
plot_glob(fig,131,mxl109_rho010_ma,0,200,'',case1,cmocean.cm.tempo) plot_glob(fig,132,mxl209_rho010_ma,0,200,'',case2,cmocean.cm.tempo) plot_glob(fig,133,mxl109_rho010_ma-mxl209_rho010_ma,-50,50,'',case1+'-'+case2,cmocean.cm.balance) fig.suptitle('NACHOS12.L75 Sept MXL rho010 m '+year, fontsize=25) #flx fig = plt.figure(figsize=(22,7)) plot_glob(fig,131,Heat1_ma,-400,400,'',case1,cmocean.cm.solar) plot_glob(fig,132,Heat2_ma,-400,400,'',case2,cmocean.cm.solar) plot_glob(fig,133,Heat1_ma-Heat2_ma,-100,100,'',case1+'-'+case2,cmocean.cm.balance) fig.suptitle('NACHOS12.L75 Net Heat Flux '+year, fontsize=25) fig = plt.figure(figsize=(22,7)) plot_glob(fig,131,86400*WaterFlx1_ma,-9,7,'',case1,cmocean.cm.balance) plot_glob(fig,132,86400*WaterFlx2_ma,-9,7,'',case2,cmocean.cm.balance) plot_glob(fig,133,86400*WaterFlx1_ma-86400*WaterFlx2_ma,-2,2,'',case1+'-'+case2,cmocean.cm.balance) fig.suptitle('NACHOS12.L75 Water Flux '+year, fontsize=25) fig = plt.figure(figsize=(22,7)) plot_glob(fig,131,86400*WaterDmp1_ma,-7,7,'',case1,cmocean.cm.balance) plot_glob(fig,132,86400*WaterDmp2_ma,-7,7,'',case2,cmocean.cm.balance) plot_glob(fig,133,86400*WaterDmp1_ma-86400*WaterDmp2_ma,-2,2,'',case1+'-'+case2,cmocean.cm.balance) fig.suptitle('NACHOS12.L75 Water Damping '+year, fontsize=25) # Tous les plots Atlantique # T & S fig = plt.figure(figsize=(22,7)) plot_atl(fig,131,tem1_ma[0],-2,30,'',case1,cmocean.cm.thermal) plot_atl(fig,132,tem2_ma[0],-2,30,'',case2,cmocean.cm.thermal) plot_atl(fig,133,tem1_ma[0]-tem2_ma[0],-2,2,'',case1+'-'+case2,cmocean.cm.balance) fig.suptitle('NACHOS12.L75 Surf Temperature deg C '+year, fontsize=25) fig = plt.figure(figsize=(22,7)) plot_atl(fig,131,sal1_ma[0],30,40,'',case1,cmocean.cm.haline) plot_atl(fig,132,sal2_ma[0],30,40,'',case2,cmocean.cm.haline) plot_atl(fig,133,sal1_ma[0]-sal2_ma[0],-1,1,'',case1+'-'+case2,cmocean.cm.balance) fig.suptitle('NACHOS12.L75 Surf Salinity PSU '+year, fontsize=25) fig = plt.figure(figsize=(22,7)) plot_atl(fig,131,tem1_ma[30],-2,30,'',case1,cmocean.cm.thermal) plot_atl(fig,132,tem2_ma[30],-2,30,'',case2,cmocean.cm.thermal) plot_atl(fig,133,tem1_ma[30]-tem2_ma[30],-2,2,'',case1+'-'+case2,cmocean.cm.balance) fig.suptitle('NACHOS12.L75 200m Temperature deg C '+year, fontsize=25) fig = plt.figure(figsize=(22,7)) plot_atl(fig,131,sal1_ma[30],30,40,'',case1,cmocean.cm.haline) plot_atl(fig,132,sal2_ma[30],30,40,'',case2,cmocean.cm.haline) plot_atl(fig,133,sal1_ma[30]-sal2_ma[30],-2,2,'',case1+'-'+case2,cmocean.cm.balance) fig.suptitle('NACHOS12.L75 200m Salinity PSU '+year, fontsize=25) fig = plt.figure(figsize=(22,7)) plot_atl(fig,131,tem1_ma[46],-2,30,'',case1,cmocean.cm.thermal) plot_atl(fig,132,tem2_ma[46],-2,30,'',case2,cmocean.cm.thermal) plot_atl(fig,133,tem1_ma[46]-tem2_ma[46],-2,2,'',case1+'-'+case2,cmocean.cm.balance) fig.suptitle('NACHOS12.L75 1000m Temperature deg C '+year, fontsize=25) fig = plt.figure(figsize=(22,7)) plot_atl(fig,131,sal1_ma[46],30,40,'',case1,cmocean.cm.haline) plot_atl(fig,132,sal2_ma[46],30,40,'',case2,cmocean.cm.haline) plot_atl(fig,133,sal1_ma[46]-sal2_ma[46],-1,1,'',case1+'-'+case2,cmocean.cm.balance) fig.suptitle('NACHOS12.L75 1000m Salinity PSU '+year, fontsize=25) #PSI fig = plt.figure(figsize=(22,7)) plot_atl_cont(fig,131,1e-7*psi1_ma,'',case1,-4,4,'Blues') plot_atl_cont(fig,132,1e-7*psi2_ma,'',case2,-4,4,'Blues') plot_atl_cont(fig,133,1e-7*psi1_ma-1e-7*psi2_ma,'',case1+'-'+case2,-1,1,cmocean.cm.balance) fig.suptitle('NACHOS12.L75 Stream function '+year, fontsize=25) #flx fig = plt.figure(figsize=(22,7)) 
plot_atl(fig,131,Heat1_ma,-400,400,'',case1,cmocean.cm.solar) plot_atl(fig,132,Heat2_ma,-400,400,'',case2,cmocean.cm.solar) plot_atl(fig,133,Heat1_ma-Heat2_ma,-100,100,'',case1+'-'+case2,cmocean.cm.balance) fig.suptitle('NACHOS12.L75 Net Heat Flux '+year, fontsize=25) fig = plt.figure(figsize=(22,7)) plot_atl(fig,131,86400*WaterFlx1_ma,-9,7,'',case1,cmocean.cm.balance) plot_atl(fig,132,86400*WaterFlx2_ma,-9,7,'',case2,cmocean.cm.balance) plot_atl(fig,133,86400*WaterFlx1_ma-86400*WaterFlx2_ma,-2,2,'',case1+'-'+case2,cmocean.cm.balance) fig.suptitle('NACHOS12.L75 Water Flux '+year, fontsize=25) fig = plt.figure(figsize=(22,7)) plot_atl(fig,131,86400*WaterDmp1_ma,-7,7,'',case1,cmocean.cm.balance) plot_atl(fig,132,86400*WaterDmp2_ma,-7,7,'',case2,cmocean.cm.balance) plot_atl(fig,133,86400*WaterDmp1_ma-86400*WaterDmp2_ma,-2,2,'',case1+'-'+case2,cmocean.cm.balance) fig.suptitle('NACHOS12.L75 Water Damping '+year, fontsize=25) #MXL fig = plt.figure(figsize=(22,7)) plot_atl(fig,131,mxl103_rho010_ma,0,1500,'',case1,cmocean.cm.tempo) plot_atl(fig,132,mxl203_rho010_ma,0,1500,'',case2,cmocean.cm.tempo) plot_atl(fig,133,mxl103_rho010_ma-mxl203_rho010_ma,-100,100,'',case1+'-'+case2,cmocean.cm.balance) fig.suptitle('NACHOS12.L75 March MXL rho010 m '+year, fontsize=25) fig = plt.figure(figsize=(22,7)) plot_atl(fig,131,mxl109_rho010_ma,0,200,'',case1,cmocean.cm.tempo) plot_atl(fig,132,mxl209_rho010_ma,0,200,'',case2,cmocean.cm.tempo) plot_atl(fig,133,mxl109_rho010_ma-mxl209_rho010_ma,-50,50,'',case1+'-'+case2,cmocean.cm.balance) fig.suptitle('NACHOS12.L75 Sept MXL rho010 m '+year, fontsize=25) # - all_plots(case1,case2,year)
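# Side note (not part of the original monitoring script): `all_plots` repeats the pattern
# `np.ma.array(field, mask=1-mask)` for every field. It works because the NEMO `tmask`/`tmaskutil`
# arrays used here are 1 over ocean and 0 over land, while `numpy.ma` treats nonzero mask entries
# as invalid, so `1-mask` hides the land points. A small helper capturing that idiom is sketched
# below; the name `mask_land` is ours and purely illustrative.

# +
import numpy.ma as ma

def mask_land(field, tmask):
    """Return `field` with land points (tmask == 0) masked out."""
    return ma.array(field, mask = (1 - tmask))

# e.g. tem1_ma = mask_land(ds1T.votemper[0], dsgrid.tmask[0])
# -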
16,775
/Query 함수.ipynb
6863db3a4042396564e8119e8ab06b88035d8f29
[]
no_license
hyunmin94/hyunmin
https://github.com/hyunmin94/hyunmin
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
50,543
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.15.2
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # HTML helper for comparing results side by side

from IPython.display import display_html
def display_side_by_side(*args):
    html_str = ''
    for df in args:
        html_str += df.to_html()
    display_html(html_str.replace('table', 'table style="display:inline"'), raw=True)

# # Prepare the base DataFrame

# +
import pandas as pd

# pandas version used for this tutorial: 1.0.1
data = {"name": ["Atom", "John", "Park", "Kim"], "weight": [50, 60, 80, 30], "height": [150, 180, 170, 200]}
people_df = pd.DataFrame(data)
display(people_df)
# -

# # 1) Using the comparison operator (==)

str_expr = "weight == 50"
people_df_q = people_df.query(str_expr)  # extract the rows that match the condition
print("Condition: rows where weight equals 50")
display_side_by_side(people_df, people_df_q)

# # 2) Using the comparison operator (!=)

str_expr = "height != 150"
people_df_q = people_df.query(str_expr)  # extract the rows that match the condition
print("Condition: rows where height is not 150")
display_side_by_side(people_df, people_df_q)

# # 3) Comparison operators (<, <=, >, >=)

# +
str_expr = "weight < 60"
people_df_q = people_df.query(str_expr)  # extract the rows that match the condition
print("※ Comparison operator (<)")
print("Condition: rows where weight is less than 60")
display_side_by_side(people_df, people_df_q)

str_expr = "weight <= 60"
people_df_q = people_df.query(str_expr)  # extract the rows that match the condition
print("※ Comparison operator (<=)")
print("Condition: rows where weight is 60 or less")
display_side_by_side(people_df, people_df_q)

str_expr = "weight > 60"
people_df_q = people_df.query(str_expr)  # extract the rows that match the condition
print("※ Comparison operator (>)")
print("Condition: rows where weight is greater than 60")
display_side_by_side(people_df, people_df_q)

str_expr = "weight >= 60"
people_df_q = people_df.query(str_expr)  # extract the rows that match the condition
print("※ Comparison operator (>=)")
print("Condition: rows where weight is 60 or more")
display_side_by_side(people_df, people_df_q)
# -

# # 4) The `in` operator (in, ==)
#
# - `in` and `==` give the same result
# - multiple condition values can be passed as a list [] or a tuple ()

str_expr = "weight in [60,30]"  # same result as "weight == [60,30]"
people_df_q = people_df.query(str_expr)  # extract the rows that match the condition
print("Condition: rows where weight is 30 or 60")
display_side_by_side(people_df, people_df_q)

# # 5) The `not in` operator (not in, !=)
#
# - `not in` and `!=` give the same result
# - multiple condition values can be passed as a list [] or a tuple ()

str_expr = "weight != 30"  # same result as "weight not in [30]"
people_df_q = people_df.query(str_expr)  # extract the rows that match the condition
print("Condition: rows where weight is not 30")
display_side_by_side(people_df, people_df_q)

# # 6) Logical operators (and, or, not)
# - and : true only when every condition is true
# - or : true when at least one condition is true
# - not : negates the condition that follows (true becomes false, false becomes true)

# +
str_expr = "(weight > 50) and (weight <80)"
people_df_q = people_df.query(str_expr)  # extract the rows that match the condition
print("※ and operator")
print("Condition: rows where weight is greater than 50 and less than 80")
display_side_by_side(people_df, people_df_q)

str_expr = "(weight == 50) or (weight > 60)"
print("※ or operator")
print("Condition: rows where weight is 50 or greater than 60")
people_df_q = people_df.query(str_expr)  # extract the rows that match the condition
display_side_by_side(people_df, people_df_q)

str_expr = "not (weight == 50)"
print("※ not operator")
print("Condition: rows where weight is not 50")
people_df_q = people_df.query(str_expr)  # extract the rows that match the condition
display_side_by_side(people_df, people_df_q)
# -

# # 7) Referencing external variables with @
#
#
# ### Example) rows where weight is greater than 70 and height is at most 200

num_weight = 70
num_height = 200

str_expr = "(weight > @num_weight) and (height <= @num_height)"
print("Condition: rows where weight is greater than 70 and height is at most 200")
people_df_q = people_df.query(str_expr)  # extract the rows that match the condition
display_side_by_side(people_df, people_df_q)

# # 8) Referencing external variables (or functions) with f-strings
#
# ### Example) rows where weight is greater than 70 and height is at most 200

num_weight = 70
num_height = 200

str_expr = f"(weight > {num_weight}) and (height <= {num_height})"
people_df_q = people_df.query(str_expr)  # extract the rows that match the condition
print("Condition: rows where weight is greater than 70 and height is at most 200")
display_side_by_side(people_df, people_df_q)

# # 9) Referencing a function
#
# ### Example) rows where weight is greater than the minimum of a list

# +
def weight_min(data):
    return min(data)

num_weight = 70
str_expr = "weight > @weight_min([70,90,30,50])"  # rows where weight is greater than 30
print("Condition: rows where weight is greater than the minimum of the list [70,90,30,50]")
people_df_q = people_df.query(str_expr)  # extract the rows that match the condition
display_side_by_side(people_df, people_df_q)
# -

# # 10) Querying by index
#
# - If the index has a name, use that name (df.index.name) instead of `index`.
#
# ## Example) rows whose index is greater than 1

str_expr = "index > 1"
people_df_q = people_df.query(str_expr)  # extract the rows that match the condition
print("Condition: rows whose index is greater than 1")
display_side_by_side(people_df, people_df_q)

# # 11) column.str.contains(string)
# - query() option
#
#     1) engine : python
# - contains() option
#
#     1) case : True (case-sensitive, the default), False (case-insensitive)
#
# ## Example) rows whose name contains 'a'

# +
str_expr = 'name.str.contains("a")'
people_df_q = people_df.query(str_expr, engine="python")  # extract the rows that match the condition
print("※ case-sensitive")
print("Condition: rows whose name contains 'a'")
display_side_by_side(people_df, people_df_q)

str_expr = 'name.str.contains("a",case=False)'
people_df_q = people_df.query(str_expr, engine="python")  # extract the rows that match the condition
print("※ case-insensitive")
print("Condition: rows whose name contains 'a'")
display_side_by_side(people_df, people_df_q)
# -

# # 12) column.str.startswith(string)
#
# ## Example) rows whose name starts with 'P'

str_expr = 'name.str.startswith("P")'
people_df_q = people_df.query(str_expr, engine="python")  # extract the rows that match the condition
print("Condition: rows whose name starts with 'P'")
display_side_by_side(people_df, people_df_q)

# # 13) column.str.endswith(string)
#
# ## Example) rows whose name ends with 'm'

str_expr = 'name.str.endswith("m")'
people_df_q = people_df.query(str_expr, engine="python")  # extract the rows that match the condition
print("Condition: rows whose name ends with 'm'")
display_side_by_side(people_df, people_df_q)
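# The note in section 10 (use the index *name* instead of `index` when the index is named) is easy
# to miss, so a small illustrative sketch is added here. The index name `person_id` is made up for
# this example; the code reuses `people_df` and `display_side_by_side` defined above.

# +
people_named = people_df.rename_axis("person_id")  # give the existing RangeIndex a name

str_expr = "person_id > 1"                          # query by the index *name*, not `index`
people_df_q = people_named.query(str_expr)          # extract the rows that match the condition
print("Condition: rows whose index (person_id) is greater than 1")
display_side_by_side(people_named, people_df_q)
# -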
5,526
/smaartpulse_emotion/eye_fashion/insights.ipynb
b961cd33264446fd3d9b5f4bb312bba8d9c1767e
[]
no_license
devanshg-enixta/Smaartpulse-Insights
https://github.com/devanshg-enixta/Smaartpulse-Insights
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
358,428
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## 컴피티션 소개 # # 로스만은 7개의 유럽국가에서 3,000개가 넘는 드러그 스토어를 운영하는 회사입니다. 또한, 최근 로스만 매장 관리자는 최대 6주의 일간 판매량을 예측하는 업무를 진행하고 있습니다. 매장의 판매량은 홍보를 포함하여 시장 경쟁, 학교 또는 지역의 휴일, 계절, 지역 등 많은 요인에 의해 영향을 받습니다. 수천 명의 관리자들은 자신들만의 배경 지식을 토대로 판매량을 예측하며, 매우 다양한 결과를 얻고 있습니다. # # 로스만은 독일 전체에 있는 1,115개의 매장 데이터를 바탕으로 다음 6주간의 일간 판매량을 예측하는 문제를 제시했습니다. 정확한 판매량 예측은 매장 관리자들이 효율적으로 직원들의 일정을 관리하고, 생산선 증가 및 동기 부여 관점에서 중요하게 여겨집니다. 우리가 로스만의 판매량을 확고하게 예측하는 모델 생성을 도와줌으로써 매장 관리자들이 더 중요한 것에 집중할 수 있도록 할 수 있습니다. # # 시계열 분석과 Prophet 라이브러리를 사용한 예측 # # ** 목표: ** # # - 데이터를 탐험해보자. (ECDF 라이브러리를 사용해 결측치를 다뤄보자.) # - 매장 형태와 매장 활동의 상관관계 분석을 해보자. # - 시계열 분석을 확장해서 수행해보자. (seasonal_decompose 라이브러리, 경향, 자기 상관성을 고려해보자.) # - Prophet 라이브러리를 사용해 다음 6주간의 판매량을 예측해보자.(Facebook의 방법론이다.) # + [markdown] _cell_guid="1ca6a26a-a225-44de-a28c-445dbb9395cf" _uuid="2afc7d05cfcbeeb1fb4a3d4916ad24be9cedef77" # # Time Series Analysis and Forecasting with Prophet # # **Goal:** # # - Explore the data (ECDF, handle missing values etc). # - Analysis per store type and correlational analysis of stores activity. # - Perform extensive Time Series Analysis (seasonal decomposition, trends, autocorrelation). # - Predict next 6 weeks of sales using Prophet (Facebook methodology). # - # 해당 커널은 시계열 분석에 초점을 두고 있습니다. 중요한 주제는 아직 밝혀지지 않았고, 다음 6주간의 일간 매출을 예측하기 위해 최근 Facebook이 소개한 Prophet이라는 새로운 방법을 사용하려고 합니다. 이 방법론은 휴일에 대해 모델링을 할 수 있는 좋은 특징을 가지고 있습니다. 마지막으로 끝 부분에 Seasonal ARIMA and Prophet의 장단점에 대해 논의하고자 합니다. # # 항상 하던대로 데이터를 살펴보고자 합니다. 데이터의 패턴과 존재하는 경향을 찾아내기 위해 여러가지 척도를 사용할 것이다. 추후의 분석을 위해서 기반을 탄탄하게 다져봅시다. # # **스압주의**: # # "스압주의"지만 스크롤 값을 합니다. 또한 많은 시간이 필요하겠지만, 해당 커널의 이전 버전이나 풀 버전을 [이곳](https://github.com/elena-petrova/rossmann_TSA_forecasts) 에서 확인할 수 있습니다. # # # + [markdown] _cell_guid="c1a12388-2437-443a-b7ad-f916238bd0e5" _uuid="c770837098683cad4c93dfd51ec619256f5fa871" # This notebook mainly focuses on the *Time Series Analysis*. An important topic yet not covered. I use then *new methodology Prophet*, recently introduced by *Facebook,* to predict next 6 week of sales. This methodology has a cool feature of modeling for holidays. Finally, right at the end, I also discuss*advantages and drawbacks of forecasting with Seasonal ARIMA and Prophet.* # # As it usually goes, we start with the Exploratory Data Analysis of the main metrics revealing present trends and patterns in the data, giving a solid foundation for the further (causal) analysis. # <br> # <br> # **WARNING**: # # It's a long read post but it's worth it. It might also need more time to run the script, but you can check out the full and *fast* version of the notebook on the [GitHub repository](https://github.com/elena-petrova/rossmann_TSA_forecasts). # - # 재밌게 보세요! # + [markdown] _cell_guid="ba68b412-66ff-432d-9356-1e15cc9b2e78" _execution_state="idle" _uuid="d93687a3cfa592db7089bb3312a2ce56e220b7ee" # Enjoy the reading! 
# + [markdown] _cell_guid="1b33c81c-f92e-4557-8277-81e225fc9a1d" _uuid="ca2249880400c77664cb49fc7d92273672a5cbf6" # ![rossmann][1] # # # [1]: https://kaggle2.blob.core.windows.net/competitions/kaggle/4594/media/rossmann_banner2.png # + [markdown] _cell_guid="3afa1a34-f62c-42ec-8396-e77e56dbadbd" _uuid="c559ddee2294a923160c847bc0bca8f2b454f4f8" # --- # + import warnings warnings.filterwarnings('ignore') # 패키지 불러오기 # 기본 + 일 관련 패키지 import numpy as np import pandas as pd from pandas import datetime # 데이터 시각화 패키지 import matplotlib.pyplot as plt import seaborn as sns # 더 괜찮은 시각화 툴 # %matplotlib inline # 통계 패키지 from statsmodels.distributions.empirical_distribution import ECDF # 시계열 분석 패키지 from statsmodels.tsa.seasonal import seasonal_decompose from statsmodels.graphics.tsaplots import plot_acf, plot_pacf # 페이스북이 만든 Prophet import prophet # from fbprophet import Prophet 에서 fbprophet이 설치 상 문제가 있어 prophet 패키지로 변경 # + _cell_guid="5a3b3b53-7757-403a-a1d2-8567d8869aaa" _execution_state="idle" _uuid="3b14af2f165220db8d2cb6e0684c25b59e03caa2" import warnings warnings.filterwarnings("ignore") # loading packages # basic + dates import numpy as np import pandas as pd from pandas import datetime # data visualization import matplotlib.pyplot as plt import seaborn as sns # advanced vizs # %matplotlib inline # statistics from statsmodels.distributions.empirical_distribution import ECDF # time series analysis from statsmodels.tsa.seasonal import seasonal_decompose from statsmodels.graphics.tsaplots import plot_acf, plot_pacf # prophet by Facebook import prophet # + # 학습을 위한 훈련데이터 불러오기, Date Column이 존재하기 때문에 parse_dates 옵션을 True로 설정하고 날짜별로 정리하기 위해 인덱스로 지정 train = pd.read_csv('input/train.csv', parse_dates = True, index_col = 'Date') # 추가적인 매장 데이터 store = pd.read_csv('input/store.csv') # 시계열 단위 살펴보기 train.index # + _cell_guid="868c6aa5-865a-4386-bd67-5eb1704478f4" _execution_state="idle" _uuid="5a679719e8a8d9e3385459eb1f038d2255ae2a94" # importing train data to learn train = pd.read_csv("input/train.csv", parse_dates = True, low_memory = False, index_col = 'Date') # additional store data store = pd.read_csv("input/store.csv", low_memory = False) # time series as indexes train.index # - # ## 탐험적 데이터 분석 # # 첫번째로 우리는 훈련 데이터와 매장 데이터를 살펴보고 결측치 처리와 추후 분석을 위한 새로운 변수를 만들어 볼 예정입니다. # + [markdown] _cell_guid="1e6bd756-82a9-40b5-8cca-8f4a6fe36ee0" _uuid="649f9da4af596d880618bed9c67485a9b9c35c37" # ## Exploratory Data Analysis # + [markdown] _cell_guid="4830f82b-fa4d-4070-ac54-b1d993dbecbf" _uuid="e36090f02a4a243a210b6df6f46372826a71d61c" # In this first section we go through the train and store data, handle missing values and create new features for further analysis. # - # 훈련 데이터의 형태와 데이터의 값들을 살펴보며 시작해봅시다. print('In total: ', train.shape) train.head() # + _cell_guid="7e3b313a-c653-4af4-8d6f-5b3422a4eb4e" _execution_state="idle" _uuid="8e05ee04d06d431cdd8bac1d08c1ba3a1171f87b" # first glance at the train set: head and tail print("In total: ", train.shape) train.head(5) # - # ##### Column 설명: # - Sales : 해당 일자에 대한 매출 (우리가 예측하고자 하는 Target 변수입니다.) # - Customers : 해당 일자에 대한 손님 수 # - Open : 해당 일자에 대한 매장의 영업 유무 # - Promo : 해당 일자에 대한 행사 유무 # - StateHoliday : 해당 일자에 대한 지역 휴일 유무 (일반적으로 대부분의 매장은 지역 휴일에 영업을 하지 않습니다.) # - SchoolHoliday : 공립 학교들의 휴교 유무 (매장이나 일자가 해당 Feature의 영향을 받기도 합니다.) # # 우리는 시계열 데이터를 다루고 있습니다. 따라서 추후 분석을 위해 데이터를 잘 가공해야합니다. 또한, 데이터 셋에서 'Sales', 'Customers' 변수가 높은 상관관계를 가지고 있기 때문에 두개를 조합하여 새로운 변수를 생성할 예정입니다. 
# # + [markdown] _cell_guid="dffb7252-18e1-4923-9670-c08dfe2bcb9d" _uuid="230f569230fbfb28545ca465cff06f898f7f924f" # ##### Short description: # - Sales: the turnover for any given day (target variable). # - Customers: the number of customers on a given day. # - Open: an indicator for whether the store was open: 0 = closed, 1 = open. # - Promo: indicates whether a store is running a promo on that day. # - StateHoliday: indicates a state holiday. Normally all stores, with few exceptions, are closed on state holidays. # - SchoolHoliday: indicates if the (Store, Date) was affected by the closure of public schools. # + [markdown] _cell_guid="1185c5cb-cbeb-46c3-92d3-968946864adf" _uuid="47ccbc3bb09d565d6c0f9c49187ab7af2b416c55" # We are dealing with time series data so it will probably serve us to extract dates for further analysis. We also have two likely correlated vaiables in the dataset, which can be combined into a new feature. # + # 데이터 추출 train['Year'] = train.index.year train['Month'] = train.index.month train['Day'] = train.index.day train['WeekOfYear'] = train.index.weekofyear # 새로운 변수 생성 train['SalePerCustomer'] = train['Sales']/train['Customers'] # 통계적인 정보를 출력해주는 함수 train['SalePerCustomer'].describe() # + _cell_guid="467b3abf-5989-479e-b225-db756e47555e" _uuid="ef2dff1b35a8b2659652c8f1ce00c9c3581da2e7" # data extraction train['Year'] = train.index.year train['Month'] = train.index.month train['Day'] = train.index.day train['WeekOfYear'] = train.index.weekofyear # adding new variable train['SalePerCustomer'] = train['Sales']/train['Customers'] train['SalePerCustomer'].describe() # - # 데이터셋에 매출이 0인 날이 존재하더라도 고객이 평균적으로 하루에 9.50$를 사용한다는 것을 알 수 있습니다. # + [markdown] _cell_guid="039fb816-6644-4494-a52a-79db3e5beab8" _uuid="ee4a0e393e3d8a406b229be42b85574c316c5083" # On average customers spend about 9.50$ per day. Though there are days with Sales equal to zero. # - # ### ECDF: 경험적 누적 분포 함수 # # 연속 변수들의 특징들을 살펴보기위해 ECDF을 확인해 봅시다. # # * 데이터 분포의 적합(fit)을 평가하거나 서로 다른 여러 표본 분포를 비교할 때 사용합니다. 그리고 표본으로부터 모집단 백분위수를 추정할 수 있습니다. 본래 경험적 누적 분포 함수의 형태는 이산적인 계단 형태의 그래프를 나타내지만, Rossman Store Sales DataSet에서는 데이터의 개수가 많아 부드러운 곡선의 형태로 보입니다. # [출처](https://blog.naver.com/jiehyunkim/220952781097) # # # + [markdown] _cell_guid="b086e424-d33d-43e7-940c-e7d5a0c61025" _uuid="a7b2a4d13c34397eb06e10914546f54d411ccf0d" # ### ECDF: empirical cumulative distribution function # + [markdown] _cell_guid="218c2f61-19c1-4050-8cb1-c9d8d41c3122" _uuid="06a03cce9c76bcc28021cd0264dbdfb719d7ed38" # To get the first impression about continious variables in the data we can plot ECDF. 
# + # seaborn 설정 sns.set(style = 'ticks') # 형태 c = '#386B7F'# 색 plt.figure(figsize = (12,12)) # 크기 # 매출의 ECDF plt.subplot(311) # plot의 위치, 3개의 행에서 첫번째 행 cdf = ECDF(train['Sales']) # 매출의 ecdf 얻기 plt.plot(cdf.x, cdf.y, label = 'statmodels', color = c); # 그리기 plt.xlabel('Sales'); # x축 이름 plt.ylabel('ECDF'); # y축 이름 # 고객의 ECDF plt.subplot(312) # plot의 위치, 3개의 행에서 두번째 행 cdf = ECDF(train['Customers']) # 고객의 ecdf 얻기 plt.plot(cdf.x, cdf.y , label = 'statmodels', color = c); # 그리기 plt.xlabel('Customers'); # x축 이름 plt.ylabel('ECDF'); #y축 이름 # 새로운 변수 고객 당 매출의 ECDF plt.subplot(313) # plot의 위치, 3개의 행에서 세번째 행 cdf = ECDF(train['SalePerCustomer']) # 고객 당 매출의 ecdf 얻기 plt.plot(cdf.x, cdf.y, label = 'statmodels', color = c); # 그리기 plt.xlabel('SalePerCustomer'); # x축 이름 plt.ylabel('ECDF'); #y축 이름 # + _cell_guid="e43a9b1a-30ff-4d50-9080-8ef9cda5fdf7" _uuid="c37047a96382b99936d10a3920bf775700b0db03" sns.set(style = "ticks")# to format into seaborn c = '#386B7F' # basic color for plots plt.figure(figsize = (12, 12)) plt.subplot(311) cdf = ECDF(train['Sales']) plt.plot(cdf.x, cdf.y, label = "statmodels", color = c); plt.xlabel('Sales'); plt.ylabel('ECDF'); # plot second ECDF plt.subplot(312) cdf = ECDF(train['Customers']) plt.plot(cdf.x, cdf.y, label = "statmodels", color = c); plt.xlabel('Customers'); plt.ylabel('ECDF'); # plot second ECDF plt.subplot(313) cdf = ECDF(train['SalePerCustomer']) plt.plot(cdf.x, cdf.y, label = "statmodels", color = c); plt.xlabel('Sale per Customer'); plt.ylabel('ECDF'); # - # 20%에 가까운 매출, 고객의 값이 0을 나타내고 있습니다. 결측값 전처리가 필요하겠네요. 남은 80%의 매출은 10000보다 작은 값을 가지고 있습니다. 무엇이 매출이 0의 값을 갖도록 하는 걸까요? 단순히 매장이 해당 날짜에 운영을 하지 않았다는 것이 원인인지 알아봅시다. # + [markdown] _cell_guid="577dce58-3945-4561-9497-9e7e04049b89" _uuid="c8b0d6273c12db11e5e04993353fa6aad9cf0ec1" # About 20% of data has zero amount of sales / customers that we need to deal with and almost 80% of time daily amount of sales was less than 1000. So what about zero sales, is it only due to the fact that the store is closed? # - # ### 결측값 # # <strong>매장의 휴업과 일일 매출이 0인 것의 관계</strong> # + [markdown] _cell_guid="f7c3ed93-31ae-4f3e-8bb4-3f2abb55f4b8" _uuid="28d33f9fc89c3c875acd17a9ccb601b915034ffd" # ### Missing values # #### Closed stores and zero sales stores # - # 휴업한 매장 zero_sales = train[(train.Open == 0) & (train.Sales == 0)] print('In total: ', zero_sales.shape) zero_sales.head() # + _cell_guid="1d659f57-0039-4291-ad2c-77828154c7dd" _uuid="9a9eb2c801a4e1584d21c17b597b9d24d01bbae6" # closed stores train[(train.Open == 0) & (train.Sales == 0)].head() # - # 172817개의 매장이 휴업한 기록이 있었습니다. 해당 수치는 관측치의 전체 양에 10%정도 입니다. 편향적인 예측 결과를 피하기 위해 이러한 값들을 버려야 합니다. # # 여기에서, 휴업하지 않았는데 매출이 0인 매장은 무엇이 원인일까요? # + [markdown] _cell_guid="0960522a-2fbd-4480-94c2-a819428b852c" _uuid="9c04d4e401ffebc8750e2695972c11d05a316c70" # There're 172817 closed stores in the data. It is about 10% of the total amount of observations. To avoid any biased forecasts we will drop these values. # # What about opened stores with zero sales? # - # 영업을 했으나 매출이 0인 매장 zero_sales = train[(train.Open != 0) & (train.Sales == 0)] print('In total: ', zero_sales.shape) zero_sales.head() # + _cell_guid="ac085610-1679-495f-b7f5-7bc67240a983" _uuid="a8783c4d1422410bac5e08ae862d9497bb1ec3e8" # opened stores with zero sales zero_sales = train[(train.Open != 0) & (train.Sales == 0)] print("In total: ", zero_sales.shape) zero_sales.head(5) # - # 놀랍게도 영업을 했음에도 매출이 없는 매장이 있었습니다. 54일의 데이터가 있었고 다른 외부 요인이 있을 수 있다는 것을 유추해볼 수 있습니다. 
# + [markdown] _cell_guid="6c54d654-080d-4762-bb89-7d3eaee57521" _uuid="12ac977ebc6fc68b056204a55482775926dff945" # Interestingly enough, there are opened store with __no sales on working days__. There're only 54 days in the data, so we can assume that there were external factors involved, for example manifestations. # + print('휴업한 매장과 매출이 없는 날은 예측에 포함하지 않기로 합니다.') train = train[(train['Open'] != 0) & (train['Sales'] != 0)] print('In total: ', train.shape) # + _cell_guid="133a16e6-460c-44a2-89d3-0fa1319999fa" _uuid="36ce7f48f6c02866573f67a85be02cbef0e7a612" print("Closed stores and days which didn't have any sales won't be counted into the forecasts.") train = train[(train["Open"] != 0) & (train['Sales'] != 0)] print("In total: ", train.shape) # - # 매장의 정보에 대해 살펴봅시다. # + [markdown] _cell_guid="37265fa7-5e29-44d9-acb0-5515d52f0299" _uuid="486db76f21f026457b2509bb01bfd251783845a9" # What about store information: # - # 매장의 추가적인 정보 store.head() # + _cell_guid="232cfcd1-3ecb-4a82-8ec4-1e39a762ec33" _uuid="61007f42d13f5d99d02522b803be24e3e367837e" # additional information about the stores store.head() # - # - Store: 매장의 고유 번호. # - StoreType: 4개의 다른 매장 형태를 나타내는 지표: a,b,c,d. # - Assortment: 매장을 구분하는 분류 단계: a = basic, b = extra, c = extended. # - CompetitionDistance: 근처의 경쟁 매장과의 거리. # - CompetitionOpenSince[Month/Year]: 경쟁 매장이 개업한지 대략적으로 얼마나 되었는지를 나타내는 기간. # - Promo2: 몇몇의 매장에 대해 행사를 지속하고 있는지를 나타내는 지표: 0 = store is not participating, 1 = store is participating. # - Promo2Since[Year/Week]: 해당 매장이 언제부터 Promo2에 참여하고 있는지를 나타내는 지표 # - PromoInterval: Promo2가 시작되는 간격을 나타낸다. 주어진 월에 행사를 진행한다. # + [markdown] _cell_guid="f2448d8c-34ba-4206-a921-9c5a7abde020" _uuid="aef2fe3bc4ff8347595fbdb8f39611274215a122" # - Store: a unique Id for each store # - StoreType: differentiates between 4 different store models: a, b, c, d # - Assortment: describes an assortment level: a = basic, b = extra, c = extended # - CompetitionDistance: distance in meters to the nearest competitor store # - CompetitionOpenSince[Month/Year]: gives the approximate year and month of the time the nearest competitor was opened # - Promo2: Promo2 is a continuing a promotion for some stores: 0 = store is not participating, 1 = store is participating # - Promo2Since[Year/Week]: describes the year and calendar week when the store started participating in Promo2 # - PromoInterval: describes the consecutive intervals Promo2 is started, naming the months the promotion is started. E.g. "Feb,May,Aug,Nov" means each round starts in February, May, August, November of any given year for that store # - # 결측치 확인 store.isnull().sum() # + _cell_guid="d5bc713f-4a1e-4ae0-9132-80a2a9a829d0" _uuid="1acb3a117d524b5ed6544bcae503d748ff41f6ed" # missing values? store.isnull().sum() # - # 몇몇의 변수들에 대해 결측 값이 존재하므로 전처리가 필요합니다. CompetitionDistance 부터 시작해봅시다. # + [markdown] _cell_guid="25a2875e-9bde-4c12-951c-c7c72ca6f655" _uuid="3d905023b0b5944d6907249ca3c08d9efba5ca43" # We have few variables with missing values that we need to deal with. Let's start with the `CompetitionDistance`. # - #Competitiondistance 변수에 존재하는 결측 값을 봅시다. store[pd.isnull(store.CompetitionDistance)] # + _cell_guid="33602b69-1690-4db2-b224-422d68c7f788" _uuid="6b932d3e7089db078c1b52a8eb6079fe9e8f93ef" # missing values in CompetitionDistance store[pd.isnull(store.CompetitionDistance)] # - # 겉으로 보기에 데이터로부터 특정한 패턴 없이 단순히 결측된 것으로 보입니다. 이러한 경우에는 결측 값을 중간값으로 대체하는 것이 합리적인 방법이 될 수 있습니다. 
# + [markdown] _cell_guid="9eb6694c-f930-48c1-aed8-e73afe94dabc" _uuid="6233f4fe6550fdfc8d5091efa197575ddb4df4d7" # Apperently this information is simply missing from the data. No particular pattern observed. In this case, it makes a complete sense to replace NaN with the median values (which is twice less that the average). # - # NaN 값을 중간값으로 채우기 # inplace 옵션은 store 변수에 결측 값 처리를 한 데이터를 다시 할당하지 않고, # store 변수가 가진 데이터를 처리하도록 하는 옵션입니다. 검색해보세요! store['CompetitionDistance'].fillna(store['CompetitionDistance'].median(), inplace = True) # + _cell_guid="727130aa-5538-4ae1-92ad-bd9edd83cf30" _uuid="556bbaeea0022573460de3aa1fded02108592cab" # fill NaN with a median value (skewed distribuion) store['CompetitionDistance'].fillna(store['CompetitionDistance'].median(), inplace = True) # - # 이번에는 Promo2SinceWeek 변수의 결측값을 살펴봅시다. 이상하게 느껴지는 부분을 찾으셨나요? # + [markdown] _cell_guid="7e25e9d8-d695-4b43-975a-ff563e461a28" _uuid="7768fff72cd38e96d55c329f33449d86317ea986" # Continuing further with missing data. What about `Promo2SinceWeek`? May it be that we observe unsusual data points? # + _cell_guid="cee66788-e322-49d8-a613-2c9a461e631a" _uuid="ab703f0c379978b9a0c5d9822005a6229a6e946c" # no promo = no information about the promo? _ = store[pd.isnull(store.Promo2SinceWeek)] _[_.Promo2 != 0].shape # - # Promo2의 값이 존재하지 않으면, 정보를 얻을 수 없었습니다. 그래서 우리는 이러한 값들을 0으로 채울 수 있고, 똑같은 방법으로 Competition으로부터 파생된 CompetitionOpenSinceMonth와 CompetitionOpenSinceYear 변수에 대해서도 처리할 수 있습니다. # + [markdown] _cell_guid="6c92faa1-5c1c-4b7b-91db-e80e2defc7ec" _uuid="a5a2d06ab4a4576bcc17aae21646b777e3ade214" # No, if there's no `Promo2` then there's no information about it. We can replace these values by zeros. The same goes for tha variables deducted from the competition, `CompetitionOpenSinceMonth` and `CompetitionOpenSinceYear`. # - # 남은 결측 값들을 0으로 채워봅시다. store.fillna(0, inplace = True) # + _cell_guid="fc054e3d-498d-408c-a5d6-e3f85bdccaf7" _uuid="dd12ed8abd7ed251cec0062ab9f2042f8befc9e7" # replace NA's by 0 store.fillna(0, inplace = True) # + print('훈련 데이터셋과 매장 정보 데이터를 합쳐봅시다.') # 매장 고유번호(Store)에 맞춰 내부 조인을 통해 양쪽에 매칭이 가능한 행에 대해서만 두개의 데이터셋을 합칩니다. # 조인은 두개의 데이터셋을 합치는 연산으로 데이터베이스에서 배울 수 있습니다. 자세한 내용은 검색해 보시길 바랍니다. train_store = pd.merge(train, store, how = 'inner', on = 'Store') print('In total: ', train_store.shape) train_store.head() # + _cell_guid="2a7bcb45-ebe2-495e-88f8-58d7d9b8c13f" _uuid="81cd7d0c91bb7edcf2f21056171ec2b075586921" print("Joining train set with an additional store information.") # by specifying inner join we make sure that only those observations # that are present in both train and store sets are merged together train_store = pd.merge(train, store, how = 'inner', on = 'Store') print("In total: ", train_store.shape) train_store.head() # - # ### 매장의 종류 # # 이번에는 StoreType의 레벨의 차이점을 자세하게 살펴보려고 합니다. 그리고 Sales가 어떤 방법으로 분포되어 있는지 살펴보고자 합니다. # + [markdown] _cell_guid="ee8f7083-17e5-4bde-8ba3-f8984f69db2e" _uuid="c8970b11d7cf26cdf221aa4efb2d5cbf38059cb0" # ### Store types # + [markdown] _cell_guid="f48df70e-ae33-4781-b814-398a9f9930bf" _uuid="0ef10d82edadf2c2708027ebc656d5e5dc2fc0da" # In this section we will closely look at different levels of `StoreType` and how the main metric `Sales` is distributed among them. # + _cell_guid="6bbc4222-42f6-4014-9557-e9198376015c" _uuid="6baa0185e6ca348269e93315056bb9185551e770" train_store.groupby('StoreType')['Sales'].describe() # - # StoreType b 는 가장 높은 매출 평균을 가지고 있지만, 가장 적은 데이터로 구성되어있습니다. 이번에는 StoreType에 따라 가장 잘팔고 고객이 방문하는지 Sales와 Customers의 전체 합을 통해 살펴보도록 합시다. 
# + [markdown] _cell_guid="252a994a-4e08-43ac-b0be-fe110d780ac7" _uuid="8212db6d7833da3d9f42afaa0aa2d9ec10bfb1a5" # `StoreType` B has the highest average of Sales among all others, however we have much less data for it. So let's print an overall sum of `Sales` and `Customers` to see which `StoreType` is the most selling and crowded one: # + _cell_guid="b8c90288-d1f2-4f2e-a894-b8ab187ff9bb" _uuid="5e9eebdc4a081f11b6fada77b46c6114e22eabc7" train_store.groupby('StoreType')['Customers', 'Sales'].sum() # - # 명확하게 a 타입의 매장이 가장 높은 매출과 고객 수를 보유하고 있었으며, 뒤를 이어 d 타입의 매장이 2번째로 자리잡고 있었습니다. # # 이번에는 행사의 기간을 포함해보는 것은 어떨까요? Seaborn의 facet grid는 기간을 고려해 시각화 하기 좋은 도구입니다. # + [markdown] _cell_guid="54df3ecf-28bb-4807-9b0d-a50a0906eedd" _uuid="874d7c509a8541c620f5edc69a92ff8f03b9f39c" # Clearly stores of type A. `StoreType` D goes on the second place in both `Sales` and `Customers`. # What about date periods? Seaborn's facet grid is the best tool for this task: # - # 매출의 경향 sns.factorplot(data = train_store, x = 'Month', y = 'Sales', col = 'StoreType', # 매장의 타입에 따라 살펴보자 palette = 'plasma', hue = 'StoreType', row = 'Promo', # 행사를 진행하는 주기를 포함시켜보자 color = c) # + _cell_guid="6680fb46-3986-46d7-990a-66f03fc82ea7" _uuid="9677285bdd6595a4273c7d58573875d350ea0967" # sales trends sns.factorplot(data = train_store, x = 'Month', y = "Sales", col = 'StoreType', # per store type in cols palette = 'plasma', hue = 'StoreType', row = 'Promo', # per promo in the store in rows color = c) # - # 고객의 경향을 알아보자 sns.factorplot(data = train_store, x = 'Month', y ='Customers', col = 'StoreType', # 매장의 타입에 따라 살펴보자 palette = 'plasma', hue = 'StoreType', row = 'Promo', # 행사를 진행하는 주기를 포함시켜보자 color = c) # + _cell_guid="00fedbab-47c5-4098-b0f8-547e9d698a66" _uuid="4b6335f69bddb13716011644d2ca02392c1aeade" # sales trends sns.factorplot(data = train_store, x = 'Month', y = "Customers", col = 'StoreType', # per store type in cols palette = 'plasma', hue = 'StoreType', row = 'Promo', # per promo in the store in rows color = c) # - # 모든 형태의 매장이 비슷한 경향을 따른다는 것을 확인할 수 있었지만, 값의 규모의 차이가 행사 유무 및 매장의 형태에 의존성을 보였습니다. # # <strong>이미 모든 부분에서 매출이 크리스마스 연휴가 가까워짐에 따라 증가함을 확인할 수 있었습니다. 시계열 분석 부분에서 다시 계절의 영향과 경향에 대해 이야기 해봅시다.</strong> # + [markdown] _cell_guid="17cd2b7d-0e85-4cb5-b667-3d35862ae185" _uuid="07d7b866eb4feeda249fa3699fdffa64d2a52b51" # All store types follow the same trend but at different scales depending on the presence of the (first) promotion `Promo` and `StoreType` itself (case for B). # # __Already at this point, we can see that Sales escalate towards Christmas holidays. But we'll talk about seasonalities and trends later in the Time Series Analysis section.__ # - # 고객당 매출의 경향을 알아보자 sns.factorplot(data = train_store, x = 'Month', y ='SalePerCustomer', col = 'StoreType', # 매장의 타입에 따라 살펴보자 palette = 'plasma', hue = 'StoreType', row = 'Promo', # 행사를 진행하는 주기를 포함시켜보자 color = c) # + _cell_guid="17de03e5-f1c4-4391-9575-9090ac7075f1" _uuid="0940de0c939bc1994516c75f9cf271a2f3f922b2" # sale per customer trends sns.factorplot(data = train_store, x = 'Month', y = "SalePerCustomer", col = 'StoreType', # per store type in cols palette = 'plasma', hue = 'StoreType', row = 'Promo', # per promo in the store in rows color = c) # - # 이전의 plot에서 b 형태를 가진 매장이 가장 높은 판매량을 보여주었었지만 가장 낮은 고객당 판매량을 보여주었습니다. 가장 높은 고객 당 판매량은 d 형태를 가진 매장으로 관측 되었습니다. 행사가 있을 때에는 12€ 근처, 없을 때에는 10€ 근처 결과를 보여주었습니다. a, c 형태에서는 9€ 의 결과를 보여주었습니다. # # 낮은 고객당 판매량을 가진 b 타입 매장의 손님들은 개수가 적거나 적은 양의 물건들을 구매했다는 것을 알 수 있습니다. 추가적으로 전체 기간을 통틀어 최소한의 판매량과 고객 수를 산출한다는 것을 알 수 있었습니다. 
# # + [markdown] _cell_guid="a2e13db0-608f-4d09-8014-54a7a38d490e" _uuid="083c6a3ee73e528d57a833660dd97255d655a39c" # Aha! Eventhough the plots above showed `StoreType` B as the most selling and performant one, in reality it is not true. The highest `SalePerCustomer` amount is observed at the `StoreType` D, about 12€ with `Promo` and 10€ without. As for `StoreType` A and C it is about 9€. # # Low `SalePerCustomer` amount for `StoreType` B describes its Buyer Cart: there are a lot of people who shop essentially for "small" things (or in a little quantity). Plus we saw that overall this `StoreType` generated the least amount of sales and customers over the period. # - # customers sns.factorplot(data = train_store, x = 'Month', y = "Sales", col = 'DayOfWeek', # palette = 'plasma', hue = 'StoreType', row = 'StoreType', # per store type in rows color = c) # + _cell_guid="8319e131-53a7-462b-955c-ca061d8b5738" _uuid="d7715f46f381622386d0d6bcf3571d83fa8f072f" # customers sns.factorplot(data = train_store, x = 'Month', y = "Sales", col = 'DayOfWeek', # 요일별로 살펴봅시다 palette = 'plasma', hue = 'StoreType', row = 'StoreType', # 매장의 형태를 추가해 고려해봅시다 color = c) # - # c 형태의 매장은 일요일에 영업을 하지 않는다는 것을 알 수 있습니다. 한가지 재미있는 것은 d 형태의 매장은 10월과 11월의 일요일에는 영업을 하지 않습니다. # # 그러면, 일요일에 영업을 하는 매장들은 어떤곳들이 있을까요? # + [markdown] _cell_guid="b9e087df-f697-4ad3-bf08-cb8035635813" _uuid="ac6b7efdec4299f9b507f52ac99137c556698609" # We see that stores of `StoreType` C are all closed on Sundays, whereas others are most of the time opened. Interestingly enough, stores of `StoreType` D are closed on Sundays only from October to December. # # Bt the way what are the stores which are opened on Sundays? # - # 일요일에 문을 여는 매장들 train_store[(train_store.Open == 1) & (train_store.DayOfWeek == 7)]['Store'].unique() # + _cell_guid="5e53e722-6a26-40b2-80dc-e25778730b8c" _uuid="b8d8927e537091debc913a03ce015fd7a88aaba6" # stores which are opened on Sundays train_store[(train_store.Open == 1) & (train_store.DayOfWeek == 7)]['Store'].unique() # - # 우리의 데이터 분석을 위한 전처리를 마무리하기 위해 마지막으로 영업을 하는 매장들에 대해 경쟁 업체가 개업한 시기, 행사가 시작된 시기를 고려한 새로운 변수를 더하도록 하겠습니다. 
# + [markdown] _cell_guid="b21e83c0-ff98-441c-b242-08fbc38c2617" _uuid="088bc5998d6dcc46361b3fdb9bcbf39aec63a963" # To complete our preliminary data analysis, we can add variables describing the period of time during which competition and promotion were opened: # + # 경쟁업체의 개업 시기를 고려한 변수 train_store['CompetitionOpen'] = 12 * (train_store.Year - train_store.CompetitionOpenSinceYear) + \ (train_store.Month - train_store.CompetitionOpenSinceMonth) # 행사가 시작된 시기를 고려한 변수 train_store['PromoOpen'] = 12 * (train_store.Year - train_store.Promo2SinceYear) + \ (train_store.WeekOfYear - train_store.Promo2SinceWeek) / 4.0 # 결측값을 0으로 채우기 train_store.fillna(0, inplace = True) # 스토어 형태에 대한 행사 시작시기와 경쟁업체의 개업시기의 평균 train_store.loc[:,['StoreType', 'Sales','Customers','PromoOpen','CompetitionOpen']].groupby('StoreType').mean() # + _cell_guid="0d727532-55da-498f-9a6c-a650365b0a2e" _uuid="0aa951ce339281cbf9f7a141262b1adade4184bc" # competition open time (in months) train_store['CompetitionOpen'] = 12 * (train_store.Year - train_store.CompetitionOpenSinceYear) + \ (train_store.Month - train_store.CompetitionOpenSinceMonth) # Promo open time train_store['PromoOpen'] = 12 * (train_store.Year - train_store.Promo2SinceYear) + \ (train_store.WeekOfYear - train_store.Promo2SinceWeek) / 4.0 # replace NA's by 0 train_store.fillna(0, inplace = True) # average PromoOpen time and CompetitionOpen time per store type train_store.loc[:, ['StoreType', 'Sales', 'Customers', 'PromoOpen', 'CompetitionOpen']].groupby('StoreType').mean() # - # 가장 많은 판매량과 고객수를 보유했었던 a 형태의 매장은 경쟁 매장들이 생겨난지 얼마 안되었었습니다. 대신에 b 형태의 매장은 경쟁 매장들과 오랜시간동안 경쟁을 하고 있었고 또한, 가장 길게 행사를 유지하고 있었습니다. # + [markdown] _cell_guid="69f70f76-2f83-42a7-b46b-0ff992d18d9e" _uuid="6cd1bbac9504ba0bf4c34aa507d8584929067e82" # The most selling and crowded `StoreType` A doesn't appear to be the one the most exposed to competitors. Instead it's a `StoreType` B, which also has the longest running period of promotion. # + [markdown] _cell_guid="d567f4a7-46ca-4881-bbd2-452f9ac7259d" _uuid="1468a1c2f2476c1175a9b711c250b188d496b842" # ### Correlational Analysis # + [markdown] _cell_guid="dfe6f12e-f432-4ced-a113-c30744a53213" _uuid="ce899f9f61a967dba176a76113d00aec8bd9b056" # We are finished with adding new variables to the data, so now we can check the overall correlations by plotting the `seaborn` heatmap: # + _cell_guid="2f53fef0-f709-4b5a-921b-021a78e25066" _uuid="4e7f2b0e3cc7d2eba98634872b359023db2d7a1f" # Compute the correlation matrix # exclude 'Open' variable corr_all = train_store.drop('Open', axis = 1).corr() # Generate a mask for the upper triangle mask = np.zeros_like(corr_all, dtype = np.bool) mask[np.triu_indices_from(mask)] = True # Set up the matplotlib figure f, ax = plt.subplots(figsize = (11, 9)) # Draw the heatmap with the mask and correct aspect ratio sns.heatmap(corr_all, mask = mask, square = True, linewidths = .5, ax = ax, cmap = "BuPu") plt.show() # + [markdown] _cell_guid="0f20659d-608b-451e-9587-d59e0430905c" _uuid="d6cc9f79045d559581632a04700d48a8552ffdbf" # As mentioned before, we have a strong positive correlation between the amount of Sales and Customers of a store. We can also observe a positive correlation between the fact that the store had a running promotion (`Promo` equal to 1) and amount of `Customers`. # # However, as soon as the store continues a consecutive promotion (`Promo2` equal to 1) the number of `Customers` and `Sales` seems to stay the same or even decrease, which is described by the pale negative correlation on the heatmap. 
The same negative correlation is observed between the presence of the promotion in the store and the day of a week. # + _cell_guid="4b549cdb-9073-44e3-9e55-d3be73677bd0" _uuid="f5e39eb4242fc617470ff39f0b6fac8c661f575b" # sale per customer trends sns.factorplot(data = train_store, x = 'DayOfWeek', y = "Sales", col = 'Promo', row = 'Promo2', hue = 'Promo2', palette = 'RdPu') # + [markdown] _cell_guid="34c53db4-e4d8-46e5-8685-ff3bab810b9a" _uuid="cac5c896dbbfc0554a61076d6744cc2ebfdfc098" # ##### There are several things here: # - In case of no promotion, both `Promo` and `Promo2` are equal to 0, `Sales` tend to peak on Sunday (!). Though we should note that `StoreType` C doesn't work on Sundays. So it is mainly data from `StoreType` A, B and D. # - On the contrary, stores that run the promotion tend to make most of the `Sales` on Monday. This fact could be a good indicator for Rossmann marketing campaigns. The same trend follow the stores which have both promotion at the same time (`Promo` and `Promo2` are equal to 1). # - `Promo2` alone doesn't seem to be correlated to any significant change in the `Sales` amount. This can be also prooved by the blue pale area on the heatmap above. # + [markdown] _cell_guid="5dc423b7-0569-4805-b93b-8abeaa2fd923" _uuid="c8548ebedc38bf7f36f5f22cd92f652cd83b7748" # --- # + [markdown] _cell_guid="92029a40-d511-4bc4-9e0a-c2844066a237" _uuid="a627f9e2193305842b2ae6d78ba253b451f1d26f" # ### Conclusion of EDA # + [markdown] _cell_guid="52c64199-9af5-44dc-a9bd-a4615f9aabd7" _uuid="83d9bce576100392312554c561372e22eed98478" # - The most selling and crowded `StoreType` is A. # # # - The best "Sale per Customer" `StoreType` D indicates to the higher Buyer Cart. To benefit from this fact, Rossmann can consider proposing bigger variety of its products. # # # - Low `SalePerCustomer` amount for `StoreType` B indicates to the possible fact that people shop there essentially for "small" things. Eventhough this `StoreType` generated the least amount of sales and customers over the whole period, it shows a great potential. # # # - Customers tends to buy more on Modays when there's one promotion (`Promo`) and on Sundays when there's no promotion at all (both `Promo` and `Promo1` are equal to 0). # # # - Promo2 alone doesn't seem to be correlated to any significant change in the `Sales` amount. # + [markdown] _cell_guid="ace8c30e-7bcf-4b25-9054-6128abe18254" _uuid="5409f2079ca6dc3b52ca87d140f0616f70ddfc77" # <br> # ## Time-Series Analysis per Store Type # + [markdown] _cell_guid="4718aa67-1bf7-4a9c-9ebf-652d3e0a2781" _uuid="d40a766bed18a78c1b111189a76602df80756277" # What makes a time series different from a regular regression problem? # # - It is time dependent. The basic assumption of a linear regression that the observations are independent doesn’t hold in this case. # - Along with an increasing or decreasing trend, most time series have some form of seasonality trends, i.e. variations specific to a particular time frame. For example, for Christmas holidays, which we will see in this dataset. # + [markdown] _cell_guid="449f5d38-7e71-4dea-92cf-65ca8f303815" _uuid="804c4c36a2b46f14ecb9f899aaa56d50adb03f43" # We build a time series analysis on store types instead of individual stores. The main advantage of this approach is its simplicity of presentation and overall account for different trends and seasonalities in the dataset. 
# + [markdown] _cell_guid="137a404c-aaa6-4ec7-8fbd-6100b1bad7e7" _uuid="abac6543ac6a34643f54733d5201bddcc5f1ff28"
# In this section, we will analyse the time series data: its trend, seasonalities and autocorrelation. Usually at the end of such an analysis, we are able to develop a seasonal ARIMA (Autoregressive Integrated Moving Average) model, but it won't be our main focus today. Instead, we try to understand the data, and only later come up with the forecasts using the Prophet methodology.

# + [markdown] _cell_guid="de7a8aca-dd58-43c4-8520-2e4dfe436b38" _uuid="03a62c8f7efec05278f00e03247b41a865470867"
# ### Seasonality

# + [markdown] _cell_guid="29490a90-37c7-4ee4-b470-71e96b33a81c" _uuid="33b6a80dcc90d0f57826f644d2dd0774203c6359"
# ##### We take four stores, one per store type, to represent their groups:
# - Store number 2 for `StoreType` A
# - Store number 85 for `StoreType` B,
# - Store number 1 for `StoreType` C
# - Store number 13 for `StoreType` D.
#
# It also makes sense to downsample the data from days to weeks using the `resample` method to see the present trends more clearly.

# + _cell_guid="2410d7f1-8f83-42a7-9641-119f4a6ba45e" _uuid="e37d97e5918982442e141bd647b1f6041065e90a"
# preparation: input should be float type
train['Sales'] = train['Sales'] * 1.0

# store types
sales_a = train[train.Store == 2]['Sales']
sales_b = train[train.Store == 85]['Sales'].sort_index(ascending = True) # fix the reversed date order
sales_c = train[train.Store == 1]['Sales']
sales_d = train[train.Store == 13]['Sales']

f, (ax1, ax2, ax3, ax4) = plt.subplots(4, figsize = (12, 13))

# store types
sales_a.resample('W').sum().plot(color = c, ax = ax1)
sales_b.resample('W').sum().plot(color = c, ax = ax2)
sales_c.resample('W').sum().plot(color = c, ax = ax3)
sales_d.resample('W').sum().plot(color = c, ax = ax4)

# + [markdown] _cell_guid="19808aac-1687-4be1-9ed1-a66a766ec577" _uuid="e1b5c51ff08c4db00d40b95f6d42f0ac8128ca04"
# Retail sales for `StoreType` A and C tend to peak for the Christmas season and then decline after the holidays. We might have seen the same trend for `StoreType` D (at the bottom), but there is no information from July 2014 to January 2015 about these stores as they were closed.

# + [markdown] _cell_guid="1a009654-9d9f-4de8-915b-1739f6d7da2f" _uuid="9977e3db623314e191be33aa1fea3a355acb4cbe"
# ### Yearly trend

# + [markdown] _cell_guid="efc27857-abb6-4d70-8002-d7b0b652761d" _uuid="11848efdae6e532084b784b4023fc142ea799c4e"
# The next thing to check is the presence of a trend in the series.

# + _cell_guid="622f9853-dcd9-411f-b02d-f27f7f483d78" _uuid="31cbcd812ed45fd194216dc530ad793b7667089f"
f, (ax1, ax2, ax3, ax4) = plt.subplots(4, figsize = (12, 13))

# trend component of an additive decomposition with a 365-day (yearly) period
decomposition_a = seasonal_decompose(sales_a, model = 'additive', freq = 365)
decomposition_a.trend.plot(color = c, ax = ax1)

decomposition_b = seasonal_decompose(sales_b, model = 'additive', freq = 365)
decomposition_b.trend.plot(color = c, ax = ax2)

decomposition_c = seasonal_decompose(sales_c, model = 'additive', freq = 365)
decomposition_c.trend.plot(color = c, ax = ax3)

decomposition_d = seasonal_decompose(sales_d, model = 'additive', freq = 365)
decomposition_d.trend.plot(color = c, ax = ax4)

# + [markdown] _cell_guid="b6c2a0d9-ecc0-4428-b9ed-5cb59825eec6" _uuid="ab965f7ef8da78e31c7c582bc7d05cf129fe200c"
# Overall sales seem to increase, however not for `StoreType` C (third from the top). Even though `StoreType` A is the most selling store type in the dataset, it seems that it can follow the same decreasing trajectory as `StoreType` C did.
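# + [markdown]
# The visual trend check above can be complemented by a rough stationarity test. This is a sketch only: it assumes `adfuller` can be imported from `statsmodels.tsa.stattools` (the same package that provides `seasonal_decompose`), and it reuses the weekly-resampled series defined above.

# +
from statsmodels.tsa.stattools import adfuller

for name, series in [('A', sales_a), ('B', sales_b), ('C', sales_c), ('D', sales_d)]:
    # Augmented Dickey-Fuller test on weekly totals: a low p-value is evidence against a unit root
    adf_stat, p_value = adfuller(series.resample('W').sum().dropna())[:2]
    print('StoreType {}: ADF statistic = {:.2f}, p-value = {:.3f}'.format(name, adf_stat, p_value))
# -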
# + [markdown] _cell_guid="19a39a9a-94bd-426f-b368-ee3174bb9f23" _uuid="f5f9c4d5c110d7bf77a4eb39936d7e4dc3551cfb"
# ### Autocorrelation

# + [markdown] _cell_guid="6484122c-8ca7-4163-bcbd-e5acd61d5596" _uuid="ffa9084c0d919ccf4ad7fe908f96981d48128a3b"
# The next step in our time series analysis is to review the Autocorrelation Function (ACF) and Partial Autocorrelation Function (PACF) plots.
#
# ACF is a measure of the correlation between the time series and a lagged version of itself. For instance, at lag 5, ACF would compare the series at time instants 't1'...'tn' with the series at instants 't1-5'...'tn-5' (t1-5 and tn being end points).
#
# PACF, on the other hand, measures the correlation between the time series and a lagged version of itself, but after eliminating the variations explained by the intervening comparisons. E.g. at lag 5, it will check the correlation but remove the effects already explained by lags 1 to 4.

# + _cell_guid="ac47d973-e835-499e-81c1-5f94de91fca9" _uuid="5b37785066879f7f5b190e451721e0ad4dbeb55d"
# figure for subplots
plt.figure(figsize = (12, 8))

# acf and pacf for A
plt.subplot(421); plot_acf(sales_a, lags = 50, ax = plt.gca(), color = c)
plt.subplot(422); plot_pacf(sales_a, lags = 50, ax = plt.gca(), color = c)

# acf and pacf for B
plt.subplot(423); plot_acf(sales_b, lags = 50, ax = plt.gca(), color = c)
plt.subplot(424); plot_pacf(sales_b, lags = 50, ax = plt.gca(), color = c)

# acf and pacf for C
plt.subplot(425); plot_acf(sales_c, lags = 50, ax = plt.gca(), color = c)
plt.subplot(426); plot_pacf(sales_c, lags = 50, ax = plt.gca(), color = c)

# acf and pacf for D
plt.subplot(427); plot_acf(sales_d, lags = 50, ax = plt.gca(), color = c)
plt.subplot(428); plot_pacf(sales_d, lags = 50, ax = plt.gca(), color = c)

plt.show()

# + [markdown] _cell_guid="18512b29-7966-4f96-9f5d-ea7d67ce9148" _uuid="523825025c9f8875292094f6748a46c6a0b2249e"
# We can read these plots horizontally. Each horizontal pair is for one `StoreType`, from A to D. In general, these plots show the correlation of the series with itself, lagged by x time units.
#
# There are at least two things common to each pair of plots: the non-randomness of the time series and the high lag-1 autocorrelation (which will probably need a higher order of differencing d/D).
#
# - Type A and type B:
# Both types show seasonalities at certain lags. For type A, it is every 12th observation, with positive spikes at the 12 (s) and 24 (2s) lags and so on. For type B it's a weekly pattern, with positive spikes at the 7 (s), 14 (2s), 21 (3s) and 28 (4s) lags.
#
#
# - Type C and type D:
# Plots of these two types are more complex. It seems like each observation is correlated with its adjacent observations.

# + [markdown] _cell_guid="fa5ee8f1-3b65-4338-9e39-d1cda1ef811e" _uuid="b8bf410fb049e377123c7eaa3305a842911997aa"
# ## Time Series Analysis and Forecasting with Prophet
# #### Forecasting for the next 6 weeks for the first store

# + [markdown] _cell_guid="6c04d2f6-bd8f-45e2-b85e-791f7b1ade2a" _uuid="2f6dca745c7dcedf3a3567c0365deaa06d78a389"
# The Core Data Science team at Facebook recently published a new procedure for forecasting time series data called [Prophet](https://research.fb.com/prophet-forecasting-at-scale/). It is based on an additive model where non-linear trends are fit with yearly and weekly seasonality, plus holidays. It enables performing [automated forecasting, of the kind already implemented in R](https://www.rdocumentation.org/packages/forecast/versions/7.3/topics/auto.arima), at scale in Python 3.
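# + [markdown]
# Before handing the series over to Prophet, the seasonal lags read off the ACF plots above can be confirmed numerically. A sketch only, assuming `acf` can be imported from `statsmodels.tsa.stattools`:

# +
from statsmodels.tsa.stattools import acf

for name, series in [('A', sales_a), ('B', sales_b)]:
    acf_vals = acf(series, nlags = 50)
    # strongest positive autocorrelations, excluding lag 0
    top_lags = np.argsort(acf_vals[1:])[::-1][:5] + 1
    print('StoreType {}: strongest lags {}'.format(name, sorted(top_lags)))
# -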
# + _cell_guid="d8d98e16-447d-4590-a3e6-05fa3f59417a" _uuid="c7dfa8eaa64ccdf03e4ff9501799eec497ebbcb2" # importing data df = pd.read_csv("input/train.csv", low_memory = False) # remove closed stores and those with no sales df = df[(df["Open"] != 0) & (df['Sales'] != 0)] # sales for the store number 1 (StoreType C) sales = df[df.Store == 1].loc[:, ['Date', 'Sales']] # reverse to the order: from 2013 to 2015 sales = sales.sort_index(ascending = False) # to datetime64 sales['Date'] = pd.DatetimeIndex(sales['Date']) sales.dtypes # + _cell_guid="39c9d9a3-8036-4ce6-a1ee-c696cb2e3b90" _uuid="efd3d7a8e97fe2c1998e4e590de58802b07dcf7e" # from the prophet documentation every variables should have specific names sales = sales.rename(columns = {'Date': 'ds', 'Sales': 'y'}) sales.head() # + _cell_guid="f2b15a8a-c7be-4135-a8c2-3f6f0b28ba41" _uuid="fd790644cbd4c52ff936811cfa729b92a2c208ed" # plot daily sales ax = sales.set_index('ds').plot(figsize = (12, 4), color = c) ax.set_ylabel('Daily Number of Sales') ax.set_xlabel('Date') plt.show() # + [markdown] _cell_guid="c3595dde-c625-4412-a242-08030b3d10e0" _uuid="181e060488627fb055399998a50f759538ecb441" # ### Modeling Holidays # + [markdown] _cell_guid="d6b49863-b009-4e68-a4c9-916f1fc56739" _uuid="d99df9e8487b8feff43d940fc8d436f2c236b501" # Prophet also allows to [model for holidays](https://facebookincubator.github.io/prophet/docs/holiday_effects.html), and that's what we do here. # # The StateHoliday variable in the dataset indicates a state holiday, at which all stores are normally closed. There are also school holidays in the dataset at which ceratin stores are also closing their doors. # + _cell_guid="418c4d33-08f3-484e-9ff1-79b00b19310a" _uuid="4125c9ab0bc590b7050424c6e019003f85a634f9" # create holidays dataframe state_dates = df[(df.StateHoliday == 'a') | (df.StateHoliday == 'b') & (df.StateHoliday == 'c')].loc[:, 'Date'].values school_dates = df[df.SchoolHoliday == 1].loc[:, 'Date'].values state = pd.DataFrame({'holiday': 'state_holiday', 'ds': pd.to_datetime(state_dates)}) school = pd.DataFrame({'holiday': 'school_holiday', 'ds': pd.to_datetime(school_dates)}) holidays = pd.concat((state, school)) holidays.head() # + _cell_guid="796c0e2a-c659-4c18-991c-2b209a767ee4" _uuid="ec11e1fcbb72d098a8e8c28657888243746f3fcf" # set the uncertainty interval to 95% (the Prophet default is 80%) my_model = prophet(interval_width = 0.95, holidays = holidays) my_model.fit(sales) # dataframe that extends into future 6 weeks future_dates = my_model.make_future_dataframe(periods = 6*7) print("First week to forecast.") future_dates.tail(7) # + _cell_guid="10fd6159-7dbe-44cf-87d5-e329eb5a58e2" _uuid="003bccbae7c43979f072e63b680739af28809787" # predictions forecast = my_model.predict(future_dates) # preditions for last week forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail(7) # + [markdown] _cell_guid="e7702a93-c542-453b-b1ef-c780f7a6578e" _uuid="7485d14e593ca99bca724014611c67812acb37bf" # The forecast object here is a new dataframe that includes a column yhat with the forecast, as well as columns for components and uncertainty intervals. 
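# + [markdown]
# A rough in-sample error check for the fit above. This is a sketch only: it assumes `sales` and `forecast` are the frames built in the previous cells, joins the predictions back onto the observed values and reports MAPE and RMSE over the overlapping dates.

# +
check = forecast[['ds', 'yhat']].merge(sales, on = 'ds', how = 'inner')

mape = (abs(check['y'] - check['yhat']) / check['y']).mean() * 100
rmse = np.sqrt(((check['y'] - check['yhat']) ** 2).mean())
print('in-sample MAPE: {:.1f}%, RMSE: {:.0f}'.format(mape, rmse))
# -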
# + _cell_guid="0a2432f9-9bd7-4b36-9141-43447c4ca448" _uuid="ee3bd97dd2ebcd301fd09e6c78077e0b803ed208" fc = forecast[['ds', 'yhat']].rename(columns = {'Date': 'ds', 'Forecast': 'yhat'}) # + [markdown] _cell_guid="300e9a3a-bf0d-43e4-aac3-e9db12f2c500" _uuid="58c59894209b070ed5ac8b3d1369bbf76cd265ee" # Prophet plots the observed values of our time series (the black dots), the forecasted values (blue line) and the uncertainty intervals of our forecasts (the blue shaded regions). # + _cell_guid="0de56903-5ab2-45d4-99b0-10ba497006b9" _uuid="c2b553f8d2b69ee465b8f2a34a9e90182d258daa" # visualizing predicions my_model.plot(forecast); # + [markdown] _cell_guid="48185ddd-8f02-4e11-a140-346e6e6c5552" _uuid="199a605832653b53d21be752fb70d94571167bb7" # As we see Prophet catches the trends and most of the time gets future values right. # # One other particularly strong feature of Prophet is its ability to return the components of our forecasts. This can help reveal how daily, weekly and yearly patterns of the time series plus manyally included holidayes contribute to the overall forecasted values: # + _cell_guid="26fded30-65b3-425f-9bc0-695fd2e51742" _uuid="ebb2326cfd275eb2dc26d2a191dfe7935ac59cfa" my_model.plot_components(forecast); # + [markdown] _cell_guid="93d3d3aa-d0f6-4be3-8897-63242cd6db67" _uuid="dd7c1221d2de07fd847da4ab4dea28ec57fee9c1" # The first plot shows that the monthly sales of store number 1 has been linearly decreasing over time and the second shows the holiays gaps included in the model. The third plot highlights the fact that the weekly volume of last week sales peaks towards the Monday of the next week, while the forth plot shows that the most buzy season occurs during the Christmas holidays. # + [markdown] _cell_guid="8b237378-130a-4c83-8fff-f33904b441b5" _uuid="2e2fedfeb67d98f7a38f29725a1640fccc21dd6e" # ### Conclusion of Time Series forecasting # + [markdown] _cell_guid="e70ba3c4-62cf-4070-b821-ef6d4a919f64" _uuid="032d12f318ea0dcfd985f3474087ece16c3e9c8c" # During this part, we discussed time series analysis with `.seasonal_decompose()`, `ACF` and `PCF` plots and fitted forecasting model using a new procedure by Facebook `Prophet`. # # We can now present main advantages and drawbacks of time series forecasting: # # ##### __Advantages__ # - A powerful tool for the time series forecasting as it accounts for time dependencies, seasonalities and holidays (Prophet: manually). # - Easily implemented with R `auto.arima()` from `forecast` package, which runs a complex grid search and sophisticated algorithm behind the scene. # # ##### __Drawbacks__ # - Doesn't catch interactions between external features, which could improve the forecasting power of a model. In our case, these variables are `Promo` and `CompetitionOpen`. # - Even though Prophet offers an automated solution for ARIMA, this methodology is under development and not completely stable. # - Fitting seasonal ARIMA model needs 4 to 5 whole seasons in the dataset, which can be the biggest drawback for new companies. # - Seasonal ARIMA in Python has 7 hyper parameters which can be tuned only manually affecting significantly the speed of the forecasting process. # + [markdown] _cell_guid="7ebea717-4d41-45bb-83be-d1a0ad31ab2d" _execution_state="idle" _uuid="4fe85fa76965083814a90beabd36ad07266be574" # **Want to see more of Kernels like this one? Leave an upvote then :)**
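# + [markdown]
# One of the drawbacks listed above is that external features such as `Promo` are not used by the model. Prophet can partially address this with extra regressors. The cell below is a hedged sketch rather than part of the original analysis: it assumes the installed Prophet version supports `add_regressor`, and it reuses the `df`, `holidays` and `prophet` objects defined earlier. For a true forecast the future values of `Promo` would have to be supplied as well; here the historical dates are simply scored again.

# +
# store 1 again, this time keeping Promo as an extra regressor
sales_promo = df[df.Store == 1].loc[:, ['Date', 'Sales', 'Promo']]
sales_promo = sales_promo.sort_index(ascending = False)
sales_promo['Date'] = pd.DatetimeIndex(sales_promo['Date'])
sales_promo = sales_promo.rename(columns = {'Date': 'ds', 'Sales': 'y'})

model_reg = prophet(interval_width = 0.95, holidays = holidays)
model_reg.add_regressor('Promo')
model_reg.fit(sales_promo)

forecast_reg = model_reg.predict(sales_promo[['ds', 'Promo']])
forecast_reg[['ds', 'yhat']].tail()
# -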
46,883
/Session 3/Session_3_GTD_Structured Data.ipynb
496c88b268d9c53c9cb791c976a53f76c44186a5
[ "MIT" ]
permissive
TinaRekhi/applications
https://github.com/TinaRekhi/applications
0
0
null
2020-09-08T23:27:09
2020-09-02T10:09:13
null
Jupyter Notebook
false
false
.py
87,064
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 [ae-tf] # language: python # name: python3-ae-tf # --- from viska.eval_rec import * # %load_ext autoreload # %autoreload 2 PROJECT_PATH = '../' TRAIN_PATH = '/scratch/ceph/swei20/data/pfsspec/train/ae/dataset/bosz/nowave/norm_mr_100k' TEST_PATH = '/scratch/ceph/swei20/data/pfsspec/train/ae/dataset/bosz/nowave/norm_mr_10k' # + language="javascript" # IPython.OutputArea.prototype._should_scroll = function(lines) { # return false; # } # - # %pylab inline import os import sys import h5py import numpy as np import matplotlib.pyplot as plt # Allow load project as module sys.path.insert(0, PROJECT_PATH) os.environ['PFSSPEC_DATA'] = r'/scratch/ceph/dobos/data/pfsspec' os.environ['PYSYN_CDBS'] = os.path.join(os.environ['PFSSPEC_DATA'], 'cdbs') # + # from pfsspec.data.dataset import Dataset # from pfsspec.obsmod.spectrum import Spectrum # - from viska.plot_svd import * os.environ["CUDA_VISIBLE_DEVICES"] = "0" import tensorflow as tf # tf.enable_v2_behavior() gpus = tf.config.list_physical_devices('GPU') for gpu in gpus: tf.config.experimental.set_memory_growth(gpu, True) gpus 4096**2 / 1e7 # # Load dataset and plot some examples with h5py.File(TRAIN_PATH, 'r') as f: t_flux = f['flux'][()] # t_x = f['X'][()] with h5py.File(TRAIN_PATH, 'r') as f: tS100k = f['S'][()] with h5py.File(TEST_PATH, 'r') as f: v_flux = f['flux'][()] log_flux = np.log(t_flux) log_flux.mean(), log_flux.std() log_flux.min(), log_flux.max() norm_flux = -get_norm(log_flux, axis = None) ae_norm_flux = get_min_max_norm(norm_flux) SAVE_PATH = '/scratch/ceph/swei20/data/ae/dataset/test/flux.h5' v_norm_flux = -get_norm(np.log(v_flux), axis = None) ae_v_norm_flux = get_min_max_norm(v_norm_flux) import h5py with h5py.File(SAVE_PATH, 'w') as f: f.create_dataset('flux100kmm', data = flux100kmm) f.create_dataset('flux10kmm', data = flux10kmm) f.create_dataset('flux100k', data = flux100k) f.create_dataset('flux10k', data = flux10k) f.create_dataset('flux100kms', data = norm_flux) f.create_dataset('flux100kmsmm', data = ae_norm_flux) f.create_dataset('flux10kms', data = v_norm_flux) f.create_dataset('flux10kmsmm', data = ae_v_norm_flux) flux100k = prepro(t_flux, minmax=0) flux10k = prepro(v_flux, minmax=0) flux100kmm = prepro(t_flux, minmax=1) flux10kmm = prepro(v_flux, minmax=1) flux100kmm.shape flux10kmm = prepro(v_flux, minmax=1) AE # ### AE norm input from viska.AE import AE input_dim, latent_dim = 4096, 32 hidden_units = [128] # hidden_units = [] # reg1 = 1e-5 reg1 = None encoder_dp=0.2 lr = 0.001 loss='mse' model= AE(input_dim, latent_dim, hidden_units, reg1=reg1, encoder_dp=encoder_dp, loss=loss, lr=lr) svs0, covs0 = analyze(model.ae) # x_train = flux100k x_train = -norm_flux # x_train = ae_norm_flux model.fit(x_train, ep=200) model.ae.loss # ep25 loss: 0.0025 - val_loss: 0.0016 x_test = -norm_flux _,_,_ = get_ae_errs(x_test, model) plot_ww0(svs0, covs0, model) from viska.AE import AE input_dim, latent_dim = 4096, 32 hidden_units = [128] # hidden_units = [] # reg1 = 1e-5 reg1 = None encoder_dp=0.2 lr = 0.001 loss='mae' model= AE(input_dim, latent_dim, hidden_units, reg1=reg1, encoder_dp=encoder_dp, loss=loss, lr=lr) svs0, covs0 = analyze(model.ae) # x_train = flux100k x_train = -norm_flux # x_train = ae_norm_flux model.fit(x_train, ep=200) # mae , loss: 0.0025 - val_loss: 0.0016 x_test = -norm_flux _,_,_ = get_ae_errs(x_test, model) plot_ww0(svs0, covs0, 
model) # mse , loss: 0.0025 - val_loss: 0.0016 x_test = -norm_flux _,_,_ = get_ae_errs(x_test, model) plot_ww0(svs0, covs0, model) # ### AE norm input from viska.AEn import NormAE input_dim, latent_dim = 4096, 32 hidden_units = [128] # hidden_units = [] # reg1 = 1e-5 reg1 = None encoder_dp=0.2 lr = 0.001 loss='mae' model= NormAE(input_dim, latent_dim, hidden_units, reg1=reg1, encoder_dp=encoder_dp, loss=loss, lr=lr) svs0, covs0 = analyze(model.ae) # + # # x_train = flux100k # x_train = -norm_flux # # x_train = ae_norm_flux # model.fit(x_train, ep=200) # - # loss: 0.0078 - val_loss: 0.0065 -- > loss: 0.0061 - val_loss: 0.0061 # input = ae_norm_flux x_test = ae_norm_flux _,_,_ = get_ae_errs(x_test, model) plot_ww0(svs0, covs0, model) # loss: 0.0078 - val_loss: 0.0065 -- > loss: 0.0061 - val_loss: 0.0061 # input = ae_norm_flux x_test = ae_norm_flux _,_,_ = get_ae_errs(x_test, model) plot_ww0(svs0, covs0, model) # ### DAE dim = 32 # ''' # dumb AE with one layer from viska.DAE_mae import DumbAE latent_dim = 4096 encoder_dp = 0.2 reg1 = 0.00001 lr=0.01 # hidden_units = [128] ae = DumbAE(latent_dim, encoder_dp, reg1, lr) x_train = norm_flux ae.fit(x_train, ep=500) _,_,_ = get_ae_errs(x_train, ae) svs_train = get_svs(ae, svs=[]) plot_esd(svs_init, svs_train=svs_train) # loss: 0.0387 - val_loss: 0.0255 _,_,_ = get_ae_errs(x_train, ae) svs_train = get_svs(ae, svs=[]) plot_esd(svs_init, svs_train=svs_train) ae.fit(x_train, ep=50) # loss: 0.0387 - val_loss: 0.0255 _,_,_ = get_ae_errs(x_train, ae) svs_train = get_svs(ae, svs=[]) plot_esd(svs_init, svs_train=svs_train) # + # svs_init = get_svs(ae, svs=[]) # + # plot_esd(svs_init, svs_train=None) # + jupyter={"outputs_hidden": true} x_train = norm_flux ae.fit(x_train, ep=300) # - # 4096-4096-4096 dp =0.2/0.2 reg1=1e-5 lr=0.01 # loss: 0.0062 - val_loss: 0.0044 _,_,_ = get_ae_errs(x_train, ae) svs_train = get_svs(ae, svs=[]) plot_esd(svs_init, svs_train=svs_train) _ = ww_4096(ae, rds) svs_train # 4096-4096-4096 dp =0.2/0.2 reg1=None lr=0.01 # loss: 0.0062 - val_loss: 0.0044 _,_,_ = get_ae_errs(x_train, ae) svs_train = get_svs(ae, svs=[]) plot_esd(svs_init, svs_train=svs_train) # 4096-4096-4096 dp =0.1/0. reg1=None lr=0.01 # loss: 0.0077 - val_loss: 0.0044 _,_,_ = get_ae_errs(x_train, ae) svs_train = get_svs(ae, svs=[]) plot_esd(svs_init, svs_train=None) plot_esd(svs_init, svs_train=svs_train) _ = ww_4096(ae, rds) # 4096-4096-4096 dp =0.2/0. reg1=0 lr=0.001 # loss: 0.0075 - val_loss: 0.0045 _,_,_ = get_ae_errs(x_train, ae) _ = ww_4096(ae, rds) # 4096-4096-4096 dp =0.2/0.1 reg1=0 lr=0.001 # loss: 0.0075 - val_loss: 0.0045 _,_,_ = get_ae_errs(x_train, ae) _ = ww_4096(ae, rds) # 4096-4096-4096 dp0.1 / 0.05 reg1=0 lr=0.01 # loss: 0.0075 - val_loss: 0.0045 _,_,_ = get_ae_errs(x_train, ae_m32_dp01) _ = ww_4096(ae_m32_dp01, rds) # 4096-4096-4096 dp=0.1 reg1=0 lr=0.01 l = 0.00 # 20s 4ms/step - loss: 0.0075 - val_loss: 0.0045 _,_,_ = get_ae_errs(x_train, ae_m32) svs = ww_4096(ae_m32, rds) # 4096-4096-4096 dp=0.05 reg1=0 lr=0.01 l = 0.00 # 20s 4ms/step - loss: 0.0075 - val_loss: 0.0045 _,_,_ = get_ae_errs(x_train, ae_m32) # 4096-4096-4096 dp=0.05 reg1=0 lr=0.01 l = 0.00 # 20s 4ms/step - loss: 0.0075 - val_loss: 0.0045 _,_,_ = get_ae_errs(x_train, ae_m32) ae_m32. 
# 4096-4096-4096 dp=0 reg1=0 l = 0.0090 _,_,_ = get_ae_errs(x_train, ae_m32) #17M ae_v32,_,_ = get_ae_errs(x_train, ae_m32) # 19M ae_v32,_,_ = get_ae_errs(x_train, ae_m32) # ### DAE dim = 32 from viska.DAE import DumbAE latent_dim = 32 model = DumbAE(latent_dim) # + jupyter={"outputs_hidden": true} # x_train = 1 - x_train m = model m.fit(x_train, ep=25) # - m = model ae_val = m.ae.predict(x_train) # + mse_err = np.sqrt(np.sum((x_train - ae_val)**2, axis = 1)).mean() mae_err = np.sum(abs(x_train - ae_val), axis = 1).mean() mse_err, mae_err # - plot_rec(x_train, ae_val, N=20, label='rec') # + jupyter={"outputs_hidden": true} plot_rec(x_train, ae_val, N=20, label='rec') # - # ### AE dim = 8 from viska.DAE_mae import DumbAE latent_dim = 8 m8 = DumbAE(latent_dim) # + jupyter={"outputs_hidden": true} # x_train = 1 - x_train m8.fit(x_train, ep=30) # + jupyter={"outputs_hidden": true} ae_val8 = get_errs(x_train, m8) # + jupyter={"outputs_hidden": true} ae_pred8 = get_errs(x_test, m8) # + jupyter={"outputs_hidden": true} plot_rec(x_train, ae_val, N=20, label='rec') # - # ### Eval np.max(np.log(v_flux + 1) / np.log(2)) np.max(v_flux +1 ) x_test = 1 - np.log(v_flux + 1) / np.log(2) np.max(x_test), x_test.shape x_test get_e # x_test = 1 - v_flux ae_pred = m8.ae.predict(x_test) err = np.sqrt(np.sum((x_test - ae_pred)**2, axis = 1)).mean() err plot_rec(x_test, ae_pred, N=20, label='rec') # ### PCA with h5py.File(TRAIN_PATH, 'r') as f: u_train = f['U'][()] with h5py.File(TEST_PATH, 'r') as f: pca_test = f['X'][()] latent_dim = 8 U_keep = u_train[:, :latent_dim] pca_pred = (pca_test.dot(U_keep)).dot(U_keep.T) # + mse_err = np.sqrt(np.sum((pca_test - pca_pred)**2, axis = 1)).mean() mae_err = np.sum(abs(pca_test - pca_pred), axis = 1).mean() mse_err, mae_err # - plot_pca_rec(pca_test, pca_pred, N=4) # ### Latent Space # # Calculate principal components and expand on truncated basis # + M = 50 PC = np.dot(X, U[:, 0:M]) PC.shape # + f, axs = plt.subplots(int(np.ceil(M / 4)), 4, figsize=(12, 3 * np.ceil(M / 4))) axs = axs.flatten() for i in range(M - 1): axs[i].plot(PC[:, i], PC[:, i + 1], '.', ms=1) # - # # Reconstruct from truncated basis rflux = np.exp(avg + np.dot(PC, U[:, 0:M].transpose())) rflux.shape # + N = 20 f, axs = plt.subplots(2 * N, 2, figsize=(16, 4 * N), squeeze=False) for i in range(N): axs[2 * i, 0].plot(wave, flux[i, :], lw=0.3) axs[2 * i, 0].plot(wave, rflux[i, :], lw=0.6) axs[2 * i, 1].plot(wave, flux[i, :], lw=0.3) axs[2 * i, 1].plot(wave, rflux[i, :], lw=0.6) # axs[2 * i, 1].set_xlim(8000, 9000) axs[2 * i + 1, 0].plot(wave, (flux[i, :] - rflux[i, :]) / flux[i, :], lw=0.5) axs[2 * i + 1, 1].plot(wave, (flux[i, :] - rflux[i, :]) / flux[i, :], lw=0.5) # axs[2 * i + 1, 1].set_xlim(8000, 9000) # - PC
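# To put the 8-component reconstruction above in context, the same error can be tracked as a
# function of the number of retained components. A sketch only: it assumes `u_train` loaded above
# has at least 64 columns and reuses `pca_test` as the matrix being reconstructed.

# +
for k in (4, 8, 16, 32, 64):
    U_k = u_train[:, :k]                               # truncated basis
    rec_k = pca_test.dot(U_k).dot(U_k.T)               # project and reconstruct
    err_k = np.sqrt(np.sum((pca_test - rec_k) ** 2, axis=1)).mean()
    print('k = {:3d}  mean L2 reconstruction error = {:.4f}'.format(k, err_k))
# -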
9,880
/第一次/Patten_Match.ipynb
e1f51dc4c9838cdb04707af5b7025451da097384
[]
no_license
BinZhang109/NLP_assignment
https://github.com/BinZhang109/NLP_assignment
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
41,335
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## 基于模式匹配的对话机器人实现 # ### Pattern Match # # 机器能否实现对话,这个长久以来是衡量机器人是否具有智能的一个重要标志。 Alan Turing早在其文中就提出过一个测试机器智能程度的方法,该方法主要是考察人类是否能够通过对话内容区分对方是机器人还是真正的人类,如果人类无法区分,我们就称之为具有”智能“。而这个测试,后来被大家叫做”图灵测试“,之后也被翻拍成了一步著名电影,叫做《模拟游戏》。 # # # # 既然图灵当年以此作为机器是否具备智能的标志,这项任务肯定是复杂的。自从 1960s 开始,诸多科学家就希望从各个方面来解决这个问题,直到如今,都只能解决一部分问题。 目前对话机器人的建立方法有很多,今天的作业中,我们为大家提供一共快速的基于模板的对话机器人配置方式。 # # 此次作业首先希望大家能够读懂这段程序的代码,其次,在此基于我们提供的代码,**能够把它改造成汉语版本,实现对话效果。** # + active="" # Pattern: (我想要A) # Response: (如果你有 A,对你意味着什么呢?) # # Input: (我想要度假) # Response: (如果你有度假,对你意味着什么呢?) # - # 为了实现模板的判断和定义,我们需要定义一个特殊的符号类型,这个符号类型就叫做"variable", 这个"variable"用来表示是一个占位符。例如,定义一个目标: "I want X", 我们可以表示成 "I want ?X", 意思就是?X是一个用来占位的符号。 # # 如果输入了"I want holiday", 在这里 'holiday' 就是 '?X' def is_variable(pat): return pat.startswith('?') and all(s.isalpha() for s in pat[1:]) is_variable('?ddd?') def pat_match(pat,saying): if is_variable(pat[0]): return True elif pat[0]!=saying[0]: return False else: return pat_match(pat[1:],saying[1:]) pat_match('I come from ?x'.split(),'I come from shanghai'.split()) pat_match('I comed from ?x'.split(),'I is comed from is shanghai'.split()) # ### 获得匹配的变量 # 以上的函数能够判断两个 pattern 是不是相符,但是我们更加希望的是获得每个variable对应的是什么值。 # # 我们对程序做如下修改: def pat_match(pat,saying): if is_variable(pat[0]): return pat[0],saying[0] elif pat[0]!=saying[0]: return False else: return pat_match(pat[1:],saying[1:]) pat_match('I come from ?X'.split(),'I come from shanghai'.split()) pat_match('?x equal ?x'.split(),'2+2 equal 4'.split()) # 但是,如果我们的 Pattern 中具备两个变量,那么以上程序就不能解决了,我们可以对程序做如下修改: def pat_match(pat,saying): if not pat or not saying : return [] elif is_variable(pat[0]): return [(pat[0],saying[0])]+pat_match(pat[1:],saying[1:]) elif pat[0] !=saying[0]: return [] else: return pat_match(pat[1:],saying[1:]) pat_match('?x equal ?x'.split(),'2+2 equal 4'.split()) # 如果我们知道了每个变量对应的是什么,那么我们就可以很方便的使用我们定义好的模板进行替换: # 为了方便接下来的替换工作,我们新建立两个函数,一个是把我们解析出来的结果变成一个 dictionary,一个是依据这个 dictionary 依照我们的定义的方式进行替换。 def pat_to_dict(pat): return {k:v for k,v in pat} pat_to_dict(pat_match('?x equal ?x'.split(),'2+2 equal 4'.split())) def subsitite(rule,parse_rule): if not rule : return [] else: return[parse_rule.get(rule[0],rule[0])]+subsitite(rule[1:],parse_rule) # 字典的get方法当只有一个参数时 是返回相应的键对应的值 # 有两个参数时 第一个参数代表 要查找的键值 第二个参数 代表当所查找的键不存在时 默认的返回值 dicts={'zhang':10, 'li':20} he=[dicts.get('wo','wo')] he dicts.get('zhang','zhang') subsitite('I come from ?x , so you come from ?y'.split(),pat_to_dict(pat_match( 'I borned ?x , and my girlfriend borned ?y'.split(),'I borned anhui , and my girlfriend borned bozhou '.split()))) pat_to_dict(pat_match( 'I borned ?x ,and my girlfriend borned ?x'.split(),'I borned anhui ,and my girlfriend borned bozhou '.split())) pat_lsit=pat_match( "I borned ?x ,and my girlfriend borned ?y".split(),"I borned anhui ,and my girlfriend borned bozhou ".split() ) pat_lsit pat_to_dict(pat_lsit) # def pat_to_dict(pat): # return {k:v for k,v in pat} def pat_to_dict(patterns): return {k: v for k, v in patterns} pat_to_dict(pat_lsit) pat_to_dict(pat_match("?X greater than ?Y".split(), "3 greater than 2".split())) pat_match("?X greater than ?Y".split(), "3 greater than 2".split()) ' '.join(subsitite('I ?x people ,girlfriend ?y people'.split(),parse_rule=pat_to_dict(pat_lsit))) pat_to_dict(pat_lsit) get_pattern={ 'I need ?x':['Image you will get ?x soon','why 
do you need ?x ?'], 'My ?x told me something':['Talk about more about your ?x','how do you think about your ?x'] } import random def get_response(saying,rule): """ >>>please implemment the code to get response as followings: >>>get_response('I need ihpone') >>>Image you will get ?x soon """ for key in get_pattern: result=pat_match(key.split(),saying.split()) if result is not None: break return ' '.join(subsitite(random.choice(get_pattern[key]).split(),parse_rule=pat_to_dict(result))) # + saying='I need iphone' for key in get_pattern: #print(key) result=pat_match(key.split(),saying.split()) #print(result) if result is not None: break print(result) print(key) print(pat_to_dict(result)) #print(random.choice(get_pattern[key])) ' '.join(subsitite(random.choice(get_pattern[key]).split(),parse_rule=pat_to_dict(result))) # - result=get_response('I need iphone',rule=None) result # ### Segment Match # 我们上边的这种形式,能够进行一些初级的对话了,但是我们的模式逐字逐句匹配的, "I need iPhone" 和 "I need ?X" 可以匹配,但是"I need an iPhone" 和 "I need ?X" 就不匹配了,那怎么办? # # 为了解决这个问题,我们可以新建一个变量类型 "?\*X", 这种类型多了一个星号(\*),表示匹配多个 # 首先,和前文类似,我们需要定义一个判断是不是匹配多个的variable def is_pattern_segment(pattern): return pattern.startswith('?*') and all(a.isalpha() for a in pattern[2:]) is_pattern_segment('?*dddddd?') from collections import defaultdict # 然后我们把之前的 pat_match程序改写成如下, 主要是增加了 is_pattern_segment的部分. # + def segment_match(pattern,saying): seg_pat,rest=pattern[0],pattern[1:] seg_pat=seg_pat.replace('?*','?') if not rest : return (seg_pat,saying),len(saying) for i,token in enumerate(saying): if rest[0]==token and is_match(rest[1:],saying[(i+1):]): return (seg_pat,saying[:i]),i return (seg_pat,saying),0 def is_match(rest,saying): if not rest and not saying: return True if not all(a.isalpha() for a in rest[0] ): return True if rest[0] != saying[0]: return False return is_match(rest[1:],saying[1:]) # - fail=False def pat_match_with_seg(pattern,saying): if not pattern or not saying: return [] pat=pattern[0] if is_variable(pat): return [(pat,saying[0])]+pat_match_with_seg(pattern[1:],saying[1:]) elif is_pattern_segment(pat): match,index=segment_match(pattern,saying) if index != 0: return [match] + pat_match_with_seg(pattern[1:], saying[index:]) else: return fail elif pat==saying[0]: return pat_match_with_seg(pattern[1:],saying[1:]) else : return fail # 这段程序里比较重要的一个新函数是 segment_match,这个函数输入是一个以 segment_pattern开头的模式,尽最大可能进行,匹配到这个边长的变量对于的部分。 segment_match('?*P is very good'.split(), "My dog and my cat is very good".split()) pat_match_with_seg('?*P is very good'.split(), "My dog and my cat is very good".split()) pat_match_with_seg('?*P is very good and ?*X'.split(), "My dog is very good and my cat is very cute".split()) rule_pattern={'?*x hello ?*y': ['How do you do', 'Please state your problem'], '?*x I want ?*y': ['what would it mean if you got ?y', 'Why do you want ?y', 'Suppose you got ?y soon'], "I was ?*X": ["Were you really ?X ?", "I already knew you were ?X ."]} def pat_to_dict1(pattern): return {k: ' '.join (v) if isinstance(v,list) else v for k,v in pattern} pat_to_dict1(pat_match_with_seg('?*P is very good and ?*X'.split(), "My dog is very good and my cat is very cute".split())) ' '.join(subsitite(rule='?P is very cute , ?X but I dislike it!'.split(), parse_rule=pat_to_dict1( pat_match_with_seg('?*P is very good and ?*X'.split(), "My dog is very good and my cat is very cute".split())))) segment_match('?*x I want ?*y','xiao ming I wants oppo') ' '.join(subsitite(rule='?x hello ?y'.split(), parse_rule=pat_to_dict1( pat_match_with_seg('?*x is very good and 
?*y'.split(), "My dog is very good and my cat is very cute".split())))) pat_match_with_seg('?*x is very good and ?*y'.split(), "My dog is very good and my cat is very cute".split()) pat_match_with_seg('?*x I want ?*y'.split(),'Junh I want apple'.split()) pat_match_with_seg('?*x I want ?*y'.split(),'xiao ming I wants oppo'.split()) rules = { "?*X hello ?*Y": ["Hi, how do you do?"], "I was ?*X": ["Were you really ?X ?", "I already knew you were ?X ."], '?*x I want ?*y': ['what would it mean if you got ?y', 'Why do you want ?y', 'Suppose you got ?y soon'] } get_response1('I am zhangbin!',rules) def get_response1(saying, response_rules): for k in response_rules.keys(): join_pat = pat_match_with_seg(k.split(), saying.split()) #print(join_pat) if not join_pat: #print('下一个') continue return ' '.join(subsitite(random.choice(response_rules[k]).split(), pat_to_dict1(join_pat))) return "对不起,我暂时不能理解您的指令" get_response1('I was xiao ming oppo',rules) import jieba # def cut(string): # return list(jieba.cut(string)) def get_response1(saying,rules): for k,v in rules.items(): join_pat=pat_match_with_seg(cut(k),cut(saying)) print(join_pat) if not join_pat: continue return ' '.join(subsitite(cut(random.choice(v)),pat_to_dict1(join_pat))) return "对不起,我暂时不能理解您的对话" rule_responses = { '?*x你好?*y': ['你好呀', '请告诉我你的问题'], '?*x我想?*y': ['你觉得?y有什么意义呢?', '为什么你想?y', '你可以想想你很快就可以?y了'], '?*x我想要?*y': ['?x想问你,你觉得?y有什么意义呢?', '为什么你想?y', '?x觉得... 你可以想想你很快就可以有?y了', '你看?x像?y不', '我看你就像?y'], '?*x喜欢?*y': ['喜欢?y的哪里?', '?y有什么好的呢?', '你想要?y吗?'], '?*x讨厌?*y': ['?y怎么会那么讨厌呢?', '讨厌?y的哪里?', '?y有什么不好呢?', '你不想要?y吗?'], '?*xAI?*y': ['你为什么要提AI的事情?', '你为什么觉得AI要解决你的问题?'], '?*x机器人?*y': ['你为什么要提机器人的事情?', '你为什么觉得机器人要解决你的问题?'], '?*x对不起?*y': ['不用道歉', '你为什么觉得你需要道歉呢?'], '?*x我记得?*y': ['你经常会想起这个吗?', '除了?y你还会想起什么吗?', '你为什么和我提起?y'], '?*x如果?*y': ['你真的觉得?y会发生吗?', '你希望?y吗?', '真的吗?如果?y的话', '关于?y你怎么想?'], '?*x我?*z梦见?*y':['真的吗? 
--- ?y', '你在醒着的时候,以前想象过?y吗?', '你以前梦见过?y吗'], '?*x妈妈?*y': ['你家里除了?y还有谁?', '嗯嗯,多说一点和你家里有关系的', '她对你影响很大吗?'], '?*x爸爸?*y': ['你家里除了?y还有谁?', '嗯嗯,多说一点和你家里有关系的', '他对你影响很大吗?', '每当你想起你爸爸的时候, 你还会想起其他的吗?'], '?*x我愿意?*y': ['我可以帮你?y吗?', '你可以解释一下,为什么想?y'], '?*x我很难过,因为?*y': ['我听到你这么说, 也很难过', '?y不应该让你这么难过的'], '?*x难过?*y': ['我听到你这么说, 也很难过', '不应该让你这么难过的,你觉得你拥有什么,就会不难过?', '你觉得事情变成什么样,你就不难过了?'], '?*x就像?*y': ['你觉得?x和?y有什么相似性?', '?x和?y真的有关系吗?', '怎么说?'], '?*x和?*y都?*z': ['你觉得?z有什么问题吗?', '?z会对你有什么影响呢?'], '?*x和?*y一样?*z': ['你觉得?z有什么问题吗?', '?z会对你有什么影响呢?'], '?*x我是?*y': ['真的吗?', '?x想告诉你,或许我早就知道你是?y', '你为什么现在才告诉我你是?y'], '?*x我是?*y吗': ['如果你是?y会怎么样呢?', '你觉得你是?y吗', '如果你是?y,那一位着什么?'], '?*x你是?*y吗': ['你为什么会对我是不是?y感兴趣?', '那你希望我是?y吗', '你要是喜欢, 我就会是?y'], '?*x你是?*y' : ['为什么你觉得我是?y'], '?*x因为?*y' : ['?y是真正的原因吗?', '你觉得会有其他原因吗?'], '?*x我不能?*y': ['你或许现在就能?*y', '如果你能?*y,会怎样呢?'], '?*x我觉得?*y': ['你经常这样感觉吗?', '除了到这个,你还有什么其他的感觉吗?'], '?*x我?*y你?*z': ['其实很有可能我们互相?y'], '?*x你为什么不?*y': ['你自己为什么不?y', '你觉得我不会?y', '等我心情好了,我就?y'], '?*x好的?*y': ['好的', '你是一个很正能量的人'], '?*x嗯嗯?*y': ['好的', '你是一个很正能量的人'], '?*x不嘛?*y': ['为什么不?', '你有一点负能量', '你说 不,是想表达不想的意思吗?'], '?*x不要?*y': ['为什么不?', '你有一点负能量', '你说 不,是想表达不想的意思吗?'], '?*x有些人?*y': ['具体是哪些人呢?'], '?*x有的人?*y': ['具体是哪些人呢?'], '?*x某些人?*y': ['具体是哪些人呢?'], '?*x每个人?*y': ['我确定不是人人都是', '你能想到一点特殊情况吗?', '例如谁?', '你看到的其实只是一小部分人'], '?*x所有人?*y': ['我确定不是人人都是', '你能想到一点特殊情况吗?', '例如谁?', '你看到的其实只是一小部分人'], '?*x总是?*y': ['你能想到一些其他情况吗?', '例如什么时候?', '你具体是说哪一次?', '真的---总是吗?'], '?*x一直?*y': ['你能想到一些其他情况吗?', '例如什么时候?', '你具体是说哪一次?', '真的---总是吗?'], '?*x或许?*y': ['你看起来不太确定'], '?*x可能?*y': ['你看起来不太确定'], '?*x他们是?*y吗?': ['你觉得他们可能不是?y?'], '?*x': ['很有趣', '请继续', '我不太确定我很理解你说的, 能稍微详细解释一下吗?'] } get_response1('我可能你',rules=rule_responses) def cut(string): str_list = [] a_list = list(jieba.cut(string)) for i in range(a_list.count(' ')): a_list.remove(' ') # for i,v in enumerate(a_list): i = 0 try: while(a_list[i]): try: if a_list[i] == '?' and a_list[i+1] == '*': str_list.append(''.join(a_list[i:i+3])) i += 3 elif a_list[i] == '?' and 'x' <= a_list[i+1] <= 'z': str_list.append(''.join(a_list[i:i+2])) i += 2 else: str_list.append(a_list[i]) i += 1 except IndexError: str_list.append(a_list[i]) break except IndexError: pass return str_list # + fail = [True, None] def pat_match_with_seg1(pattern, saying): if pattern and not saying: return fail elif not saying: return [] pat = pattern[0] if is_variable(pat): return [(pat, saying[0])] + pat_match_with_seg(pattern[1:], saying[1:]) elif pat == saying[0]: return pat_match_with_seg(pattern[1:], saying[1:]) elif is_pattern_segment(pat): match, index = segment_match(pattern, saying) return [match] + pat_match_with_seg(pattern[1:], saying[index:]) else: return fail # - def cut_jieba(string): return list(jieba.cut(string)) def cut_jieba(string): return list(jieba.cut(string)) def cut_jieba_pat(string): tmp =[] string_list = string.split() for i in string_list: if i.startswith('?'): tmp.append(i) else: tmp += cut_jieba(i) return tmp import re def replace_pat_string(string): print(re.sub(r'(?P<n1>\?\S)', ' \g<n1> ', string)) return re.sub(r'(?P<n1>\?\S)', ' \g<n1> ', string).split() def get_ZH_response(saying, rules): for i in rules.keys(): # print(pat_match_with_seg(cut_jieba_pat(i), cut_jieba(saying))) if not pat_match_with_seg1(cut_jieba_pat(i), list(jieba.cut(saying)))[-1]: continue return ''.join(subsitite(replace_pat_string(random.choice(rules[i])), pat_to_dict1(pat_match_with_seg1(i.split(), list(jieba.cut(saying)))))) get_ZH_response('就喜欢小贝', rule_responses)
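# A short English self-test of the building blocks defined above. Nothing new is defined here;
# it only calls `pat_match_with_seg`, `pat_to_dict1` and `subsitite` to show the full
# match-then-substitute round trip.

# +
bindings = pat_match_with_seg('?*x I want ?*y'.split(), 'John said I want an iPhone'.split())
print(bindings)   # expected: [('?x', ['John', 'said']), ('?y', ['an', 'iPhone'])]
print(' '.join(subsitite('Why do you want ?y ?'.split(), pat_to_dict1(bindings))))
# -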
14,193
/道路の生成.ipynb
ae0d6503cd7f479a7bca751fd644085c30917c2a
[]
no_license
GenyaNobuhara/osmnx-make
https://github.com/GenyaNobuhara/osmnx-make
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
418,840
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.15.2
#   kernelspec:
#     display_name: xai
#     language: python
#     name: xai
# ---

# # XAI Homework 6
# Mikołaj Pacek
# ## 'Fairness' of the Titanic
# For this task I used the dataset on the victims of the Titanic disaster. The column with respect to which we will examine fairness is sex.

# +
import pandas as pd
import matplotlib.pyplot as plt

df_train = pd.read_csv("train.csv").drop(columns=["PassengerId"])
df_train = df_train.drop(columns=["Name", "Cabin", "Ticket"])
df_train = df_train.dropna()
df_train = pd.get_dummies(df_train)

def split(df_train):
    return df_train.drop(columns=["Survived"]), df_train["Survived"] == 1

x_train, y_train = split(df_train)
# -

from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.neighbors import KNeighborsClassifier

# # Models
# The goal of the model is to classify whether a passenger survived. I chose 4 classification models:
# - RandomForestClassifier
# - GradientBoostingClassifier
# - LogisticRegression
# - KNeighborsClassifier
#
# In addition, we will look at 3 fairness metrics:
# - statisical_parity
# - equal_opportunity
# - predictive_equality

# +
import numpy as np
import copy
from tqdm import tqdm
import itertools

def get_fairness_stats(model, x, y):
    tn, fp, fn, tp = confusion_matrix(model.predict(x), y).ravel()
    return {
        "statisical_parity": (tp + fp) / sum([tn, fp, fn, tp]),
        "equal_opportunity": tp / (tp + fn),
        "predictive_equality": fp / (fp + tn),
    }

models = {
    "random forest": RandomForestClassifier,
    "logistic": LogisticRegression,
    "gradient boost": GradientBoostingClassifier,
    'KNN': KNeighborsClassifier,
}

model_to_stats = {}
for model_name, model_cls in models.items():
    model = model_cls()
    model.fit(x_train, y_train)

    df_male = df_train[df_train["Sex_male"] == 1]
    df_female = df_train[df_train["Sex_female"] == 1]

    male_stats = get_fairness_stats(model, *split(df_male))
    female_stats = get_fairness_stats(model, *split(df_female))

    model_to_stats[model_name] = {"male": male_stats, "female": female_stats}

# +
import seaborn as sns
sns.set_style("darkgrid")
import matplotlib.pyplot as plt

df = pd.DataFrame()
for algo, stats in model_to_stats.items():
    for sex, metric_to_value in stats.items():
        for metric, value in metric_to_value.items():
            df = df.append({
                "algo": algo,
                "metric": metric,
                "sex": sex,
                "value": value
            }, ignore_index=True)

fig, ax = plt.subplots(3, 1, figsize=(5, 15))
for i, metric in enumerate(df["metric"].unique()):
    sns.barplot(data=df[df["metric"].eq(metric)], x="algo", y="value", hue="sex", ax=ax[i])
    ax[i].set_title(metric)
fig.tight_layout()
# -

# # Conclusions
# - the trained models "discriminate" against men (the metric is almost always higher for women)
# - only the equal_opportunity metric seems to be satisfied from a fairness standpoint
# - the bias against men may come from the fact that more women survived the disaster

# +
fill_list = []
con_list = []

for con in list(life_exp.Country.unique()):
    for col in life_exp.columns.to_list():
        if life_exp.loc[life_exp['Country'] == con][col].isnull().sum() > 0:
            fill_list.append(col)
            con_list.append(con)
# -
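# The per-country fill performed in the next cell can also be written as a single groupby.
# A sketch only, assuming `life_exp` is the WHO life-expectancy frame used in this part of the
# analysis; the result is not assigned back, it just shows the equivalent one-liner.

# +
life_exp_filled = (life_exp
                   .groupby('Country', group_keys=False)
                   .apply(lambda g: g.interpolate(limit_direction='both')))
life_exp_filled.isnull().sum().sum()
# -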
life_exp.isnull().sum()*100/life_exp.isnull().count() # + fill_list2 = [] con_list2 = [] for con in list(life_exp.Country.unique()): for col in life_exp.columns.to_list(): if life_exp.loc[life_exp['Country'] == con][col].isnull().sum() > 0: fill_list2.append(col) con_list2.append(con) # - con_list2 = list(np.unique(con_list2)) fill_list2 = list(np.unique(fill_list2)) con_list2 fill_list2 len(fill_list) len(fill_list2) len(con_list) len(con_list2) sum(life_exp.isnull().sum()) life_exp.info() life_exp1 = life_exp.dropna() sum(life_exp1.isnull().sum()) life_exp1.info() life_exp.dropna(inplace=True) len(life_exp.Country.unique()) list_1 = life_exp.columns.to_list() list_1.remove('Country') list_1.remove('Status') list_1 list_2 = list(range(1,len(list_1)+1)) list_2 col_dict = dict(zip(list_1, list_2)) col_dict # + plt.figure(figsize=(15,20)) sns.set(font_scale=1.3) sns.set_style('white') for var,i in col_dict.items(): plt.subplot(5,4,i) plt.boxplot(life_exp[var], whis=3) plt.title(var) plt.tight_layout() plt.savefig('boxplots_1.png', bbox_inches="tight", dpi=1400) plt.show() # + plt.figure(figsize=(15,20)) for var,i in col_dict.items(): plt.subplot(5,4,i) plt.hist(life_exp[var]) plt.title(var) plt.tight_layout() plt.savefig('hist_1.png', bbox_inches="tight", dpi=1400) plt.show() # - list_2 = list_1[1:] list_3 = [] for col in list_2: q75, q25 = np.percentile(life_exp[col], [75 ,25]) iqr = q75 - q25 min_val = q25 - (iqr*2) max_val = q75 + (iqr*2) if len((np.where((life_exp[col] > max_val) | (life_exp[col] < min_val))[0]))*100/2128 > 5: list_3.append(col) print("Number of outliers and percentage of it in {} : {} and {}".format(col,\ len((np.where((life_exp[col] > max_val) | (life_exp[col] < min_val))[0])),len(\ (np.where((life_exp[col] > max_val) | (life_exp[col] < min_val))[0]))*100/2128)) list_3 # + plt.figure(figsize=(12,10)) sns.set(font_scale=1.25) ax = sns.heatmap(life_exp[list_2].corr(), linewidth=0.5, cmap='viridis',vmin = -1, vmax=1, \ annot=True, fmt='.0%', annot_kws = {'size': 12, 'color':'black'}) ax.set_ylim(0,19) plt.title('Variable Correlations', size=17) plt.savefig('corr_1.png', bbox_inches="tight", dpi=1400) plt.show() # - life_exp[['Country','Bmi']].sort_values('Bmi')[life_exp['Bmi']>50].Country.unique() life_exp[['Country','Measles']].sort_values('Measles')[life_exp['Measles']>10000].Country.unique() life_exp[['Country','Population']].sort_values('Population')[life_exp['Population']>200000000].Country.unique() life_exp1.drop(['Under-Five Deaths', 'Percentage Expenditure', 'Income Composition Of Resources', 'Bmi',\ 'Thinness 1-19 Years','Measles'],axis=1, inplace=True) life_exp1.columns # + list_4 = life_exp1.columns.tolist()[3:] life_exp2 = life_exp1.copy() # + from scipy.stats.mstats import winsorize list_5 = ['Hepatitis B', 'Diphtheria'] for col in list_5: life_exp2[col] = winsorize(life_exp2[col], (0.15, 0.00)) life_exp2['Polio'] = winsorize(life_exp2['Polio'], (0.10,0.00)) # - list_6 = ['Infant Deaths'] for col in list_6: life_exp2[col] = life_exp2[col]**(1/6) # + list_7 = ['Hiv/Aids', 'Gdp', 'Population'] for col in list_7: life_exp2[col] = np.log(life_exp2[col]) # - list_8 = [] for col in list_4: q75, q25 = np.percentile(life_exp2[col], [75 ,25]) iqr = q75 - q25 min_val = q25 - (iqr*1.5) max_val = q75 + (iqr*1.5) if len((np.where((life_exp2[col] > max_val) | (life_exp2[col] < min_val))[0]))*100/2128 > 5: list_8.append(col) print("Number of outliers and percentage of it in {} : {} and {}".format(col,\ len((np.where((life_exp2[col] > max_val) | (life_exp2[col] < 
min_val))[0])),len(\ (np.where((life_exp2[col] > max_val) | (life_exp2[col] < min_val))[0]))*100/2128)) # + list_9 = list(range(1,len(list_1)+1)) col_dict2 = dict(zip(list_4, list_9)) col_dict2 # + plt.figure(figsize=(15,20)) sns.set(font_scale=1.3) sns.set_style('white') for var,i in col_dict2.items(): plt.subplot(5,4,i) plt.boxplot(life_exp2[var], whis=3) plt.title(var) plt.tight_layout() plt.savefig('boxplot_updated.png', bbox_inches="tight", dpi=1400) plt.show() # + plt.figure(figsize=(15,20)) for var,i in col_dict2.items(): plt.subplot(5,4,i) plt.hist(life_exp2[var]) plt.title(var) plt.tight_layout() plt.savefig('hist_updated.png', bbox_inches="tight", dpi=1400) plt.show() # - life_exp3 = life_exp2[['Country','Life Expectancy']].groupby(life_exp2['Country']).mean().sort_values(by=['Life Expectancy']) # + sns.set(font_scale=1.25) plt.figure(figsize=(10,35)) sns.barplot(life_exp3["Life Expectancy"], life_exp3.index) plt.title("Life Expectancy per Country") plt.ylabel('') plt.xlabel('') plt.xticks(rotation=90) plt.savefig('life_exp.png', bbox_inches="tight", dpi=1400) plt.show() # + sns.set(font_scale=1.25) plt.figure(figsize=(10,3.5)) sns.barplot(life_exp2["Life Expectancy"], life_exp2["Status"]) plt.title("Life Expectancy by Status") plt.ylabel('') plt.xlabel('') plt.xticks(rotation=90) plt.savefig('life_exp_status.png', bbox_inches="tight", dpi=1400) plt.show() # + life_exp4 = life_exp2[['Year','Life Expectancy']].groupby(life_exp2['Year']).mean() sns.set(font_scale=1.3) plt.figure(figsize=(15,5)) sns.barplot(life_exp4['Year'], life_exp4["Life Expectancy"]) plt.title("Life Expectancy by Year") plt.ylabel('') plt.xlabel('') plt.xticks(rotation=90) plt.savefig('life_exp_year.png', bbox_inches="tight", dpi=1400) plt.show() # + plt.figure(figsize=(15,20)) sns.set(font_scale=1.3) sns.set_style('white') for var,i in col_dict2.items(): plt.subplot(5,3,i) plt.scatter(life_exp2[var], life_exp2['Life Expectancy']) plt.title(var) plt.tight_layout() plt.savefig('life_exp_scatter.png', bbox_inches="tight", dpi=1400) plt.show() # - pd.get_dummies(life_exp2["Status"],drop_first=True) life_exp2 = pd.concat([life_exp2, pd.get_dummies(life_exp2["Status"],drop_first=True)], axis=1) life_exp2.iloc[:,4:] # + plt.figure(figsize=(12,10)) sns.set(font_scale=1.25) ax = sns.heatmap(life_exp2.corr(), linewidth=0.5, cmap='viridis',vmin = -1, vmax=1, \ annot=True, fmt='.0%', annot_kws = {'size': 12, 'color':'black'}) ax.set_ylim(0,15) plt.title('Variable Correlations', size=17) plt.savefig('corr_upd.png', bbox_inches="tight", dpi=1400) plt.show() # + from sklearn.decomposition import PCA from sklearn.preprocessing import StandardScaler X1 = life_exp2.iloc[:,4:] X2 = StandardScaler().fit_transform(X1) sklearn_pca = PCA(n_components=7) Y = sklearn_pca.fit_transform(X2) print( 'The percentage of total variance in the dataset explained by each', 'component from Sklearn PCA.\n', sklearn_pca.explained_variance_ratio_ ) # - sum(sklearn_pca.explained_variance_ratio_) # + i = np.identity(X1.shape[1]) coef = sklearn_pca.transform(i) # - PCA_df = pd.DataFrame(abs(coef), columns=['PCA-1', 'PCA-2', 'PCA-3', 'PCA-4', 'PCA-5', 'PCA-6', 'PCA-7'], index=X1.columns) PCA_df.sort_values(by=['PCA-1', 'PCA-2', 'PCA-3', 'PCA-4', 'PCA-5', 'PCA-6', 'PCA-7']) # + sns.set(font_scale=1.3) sns.set_style('white') PCA_df.iloc[:,:-1].T.plot(kind='bar', stacked=True, figsize=(10,8)) plt.legend(loc='lower left', bbox_to_anchor=(1, 0), fontsize=15) plt.title('PCA Component Makeup') plt.savefig('pca_1.png', bbox_inches="tight", dpi=1400) plt.show() 
# - PCA_df['Sum'] = PCA_df['PCA-1']+PCA_df['PCA-2']+PCA_df['PCA-3']+PCA_df['PCA-4']+PCA_df['PCA-5']+PCA_df['PCA-6']+PCA_df['PCA-7'] PCA_TOTAL = PCA_df['Sum'].sort_values() # + sns.set(font_scale=1.3) sns.set_style('white') PCA_TOTAL.plot(kind='bar', figsize=(10,8), color='green') plt.title('Variables in PCA Component Makeup') plt.savefig('pca_2.png', bbox_inches="tight", dpi=1400) plt.show() # - v') for i in range(0, 1): station = [stations.iloc[i,5], stations.iloc[i,6]] G_sec = ox.graph_from_point(station, distance=2000, network_type='drive', simplify=False) #fig, ax = ox.plot_graph(ox.project_graph(G)) hwy_types = ['secondary', 'secondary_link'] gdf_sec = ox.graph_to_gdfs(G_sec, nodes=False) mask = ~gdf_sec['highway'].map(lambda x: isinstance(x, str) and x in hwy_types) edges = zip(gdf_sec[mask]['u'], gdf_sec[mask]['v'], gdf_sec[mask]['key']) G_sec.remove_edges_from(edges) G_sec = ox.remove_isolated_nodes(G_sec) G_sec = ox.simplify_graph(G_sec) fig, ax = ox.plot_graph(ox.project_graph(G_sec)) G_sec_proj = ox.project_graph(G_sec) nodes_sec_proj = ox.graph_to_gdfs(G_sec_proj, edges=False) sec_stats = ox.basic_stats(G_sec_proj, clean_intersects=True, circuity_dist='euclidean') sec_extends_stats = ox.extended_stats(G_sec, ecc=True, bc=True,cc=True)
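# A sketch of how the same secondary-road extraction could be repeated for every station and the
# results collected in one table. It reuses only the osmnx calls already used above (so it assumes
# the same, older osmnx API) and the latitude/longitude columns 5 and 6 of `stations`; stations
# with no usable secondary roads nearby are simply recorded as zeros.

# +
import pandas as pd

rows = []
for i in range(len(stations)):
    point = [stations.iloc[i, 5], stations.iloc[i, 6]]
    try:
        G = ox.graph_from_point(point, distance=2000, network_type='drive', simplify=False)
        gdf = ox.graph_to_gdfs(G, nodes=False)
        mask = ~gdf['highway'].map(lambda x: isinstance(x, str) and x in hwy_types)
        G.remove_edges_from(zip(gdf[mask]['u'], gdf[mask]['v'], gdf[mask]['key']))
        G = ox.simplify_graph(ox.remove_isolated_nodes(G))
        stats = ox.basic_stats(ox.project_graph(G), clean_intersects=True, circuity_dist='euclidean')
        rows.append({'station': i, 'n_nodes': stats['n'], 'n_edges': stats['m'],
                     'street_length_total': stats['street_length_total']})
    except Exception:
        rows.append({'station': i, 'n_nodes': 0, 'n_edges': 0, 'street_length_total': 0.0})

pd.DataFrame(rows)
# -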
12,665
/python/.ipynb_checkpoints/tp_son-checkpoint.ipynb
c75ee1d1397baafb63376c91f1d697dd56064011
[]
no_license
remimetzdorff/seconde
https://github.com/remimetzdorff/seconde
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
51,619
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + import numpy as np import matplotlib.pyplot as plt import matplotlib as mpl lamA, lamBH, lamNeb = np.loadtxt('To_Casey.txt', skiprows=1, unpack=True) totalSED = lamBH + lamNeb #testing the plotting of the SED in rest frame fig = plt.subplot(111) fig.loglog(lamA, totalSED, basex=10, basey=10) fig.set_title('Rest Frame SED') fig.set_ylabel('Total SED') fig.set_xlabel('Rest Frame Wavelength') plt.show() #making function to convert to obsered frame's redshift def redshift(wavelength): answer = (wavelength/1216) - 1 return answer #getting the redshift value for the central wavelengths of filters j1_r = redshift(10500) j2_r = redshift(11400) j3_r = redshift(12800) hS_r = redshift(15500) hL_r = redshift(17000) #converting the rest frame to the observed frame of the filters lamA_j1 = lamA*(1+j1_r) lamA_j2 = lamA*(1+j2_r) lamA_j3 = lamA*(1+j3_r) lamA_hS = lamA*(1+hS_r) lamA_hL = lamA*(1+hL_r) # + import os path = os.getcwd() os.listdir(path) filters = open('/Users/seperkins94/Research/DCBH/Data-Extraction-and-Filters/FILTER.RES.DR34.latest', 'r').read() filter1 = filters.find('FOURSTAR/J1') filter2 = filters.find('FOURSTAR/J2') filter3 = filters.find('FOURSTAR/J3') filter4 = filters.find('FOURSTAR/Hshort') filter5 = filters.find('FOURSTAR/Hlong') #assigns the data for J1 filter to a filter1 variable filter1 = open('filter1.txt', 'w') filter1.write(filters[filters.find('FOURSTAR/J1'):filters.find('FOURSTAR/J2')-3]) filter1.close() #assigns the data for J2 filter to a filter2 variable filter2 = open('filter2.txt', 'w') filter2.write(filters[filters.find('FOURSTAR/J2'):filters.find('FOURSTAR/J3')-4]) filter2.close() #assigns the data for J3 filter to a filter3 variable filter3 = open('filter3.txt', 'w') filter3.write(filters[filters.find('FOURSTAR/J3'):filters.find('FOURSTAR/H_cam')-3]) filter3.close() #assigns the data for HS filter to a filter4 variable filter4 = open('filter4.txt', 'w') filter4.write(filters[filters.find('FOURSTAR/Hshort'):filters.find('FOURSTAR/Ks')-3]) filter4.close() #assigns the data for Hl filter to a filter5 variable filter5 = open('filter5.txt', 'w') filter5.write(filters[filters.find('FOURSTAR/Hlong'):filters.find('FOURSTAR/Hshort')-3]) filter5.close() #extracting the data for each filter row, lamj1_raw, tranj1_raw = np.loadtxt('filter1.txt', skiprows=1, unpack=True) row, lamj2_raw, tranj2_raw = np.loadtxt('filter2.txt', skiprows=1, unpack=True) row, lamj3_raw, tranj3_raw = np.loadtxt('filter3.txt', skiprows=1, unpack=True) row, lamhS_raw, tranhS_raw = np.loadtxt('filter4.txt', skiprows=1, unpack=True) row, lamhL_raw, tranhL_raw = np.loadtxt('filter5.txt', skiprows=1, unpack=True) #trim transmission files down to only show transmission values >50% tranj1, tranj2, tranj3, tranhS, tranhL = [], [], [], [], [] lamj1, lamj2, lamj3, lamhS, lamhL = [], [], [], [], [] upperlimit = .5 lowerlimit = 0.2 for x in tranj1_raw: if x > upperlimit: lamj1 = lamj1_raw[int(np.where(tranj1_raw == x)[0])-1:] tranj1 = tranj1_raw[int(np.where(tranj1_raw == x)[0])-1:] for y in tranj1: if y < lowerlimit: lamj1 = lamj1[:int(np.where(tranj1 == y)[0])] tranj1 = tranj1[:int(np.where(tranj1 == y)[0])] break break for x in tranj2_raw: if x > upperlimit: lamj2 = lamj2_raw[int(np.where(tranj2_raw == x)[0])-1:] tranj2 = tranj2_raw[int(np.where(tranj2_raw == x)[0])-1:] for y in tranj2: if 
y < lowerlimit: lamj2 = lamj2[:int(np.where(tranj2 == y)[0])] tranj2 = tranj2[:int(np.where(tranj2 == y)[0])] break break for x in tranj3_raw: if x > upperlimit: lamj3 = lamj3_raw[int(np.where(tranj3_raw == x)[0])-1:] tranj3 = tranj3_raw[int(np.where(tranj3_raw == x)[0])-1:] for y in tranj3: if y < lowerlimit and int(np.where(tranj3 == y)[0])>5: lamj3 = lamj3[:int(np.where(tranj3 == y)[0])] tranj3 = tranj3[:int(np.where(tranj3 == y)[0])] break break for x in tranhS_raw: if x > upperlimit: lamhS = lamhS_raw[int(np.where(tranhS_raw == x)[0])-1:] tranhS = tranhS_raw[int(np.where(tranhS_raw == x)[0])-1:] for y in tranhS: if y < lowerlimit: lamhS = lamhS[:int(np.where(tranhS == y)[0])] tranhS = tranhS[:int(np.where(tranhS == y)[0])] break break for x in tranhL_raw: if x > upperlimit: lamhL = lamhL_raw[int(np.where(tranhL_raw == x)[0])-1:] tranhL = tranhL_raw[int(np.where(tranhL_raw == x)[0])-1:] for y in tranhL: if y < lowerlimit: lamhL = lamhL[:int(np.where(tranhL == y)[0])] tranhL = tranhL[:int(np.where(tranhL == y)[0])] break break # + from scipy.interpolate import interp1d from scipy import constants as con #method to incorporate the lyman break into the spectrum (roughly at 1200 A to not effect the LAlpha line) def lymanbreak(z, wavelengths, transmission, sed): for x in wavelengths: if x > 1200*(1+z): transm = np.zeros(int(np.where(wavelengths == x)[0])) return np.append(transm, transmission[int(np.where(wavelengths == x)[0]):]),np.append(transm, sed[int(np.where(wavelengths == x)[0]):]), np.append(transm, wavelengths[int(np.where(wavelengths == x)[0]):]) elif int(np.where(wavelengths == x)[0]) == len(wavelengths)-1: return np.zeros(len(wavelengths)),np.zeros(len(wavelengths)),np.zeros(len(wavelengths)) nuj1 = con.c/(10500) nuj2 = con.c/(11400) nuj3 = con.c/(12800) nuhS = con.c/(15500) nuhL = con.c/(17000) # + from scipy.interpolate import interp1d from scipy import constants as con #Plot each filter at a redshift associated with J1 central wavelength #interpolate the totalSED for each redshift: set_intj1 = interp1d(lamA_j1, totalSED, kind='linear') set_intj2 = interp1d(lamA_j2, totalSED, kind='linear') set_intj3 = interp1d(lamA_j3, totalSED, kind='linear') set_inthS = interp1d(lamA_hS, totalSED, kind='linear') set_inthL = interp1d(lamA_hL, totalSED, kind='linear') #method to plot the luminosity of the galaxy as observed by filters J1,J2,J3,Hs,Hl #incorporates lyman break #CURRENTLY NOT WORKING AS OF 9/24/16 def plotLum(z,wavelengths,SEDs, transmissions,central_wavelengths): fig = plt.figure() ax = fig.add_subplot(111) seds = [] trans =[] wavelen =[] integrand1, integrand2 = [],[] ax.set_xlim(10030.0,wavelengths[len(wavelengths)-1][len(wavelengths[len(wavelengths)-1])-1]) flux = [] averageFlux = 0 for i in range(5): #incorporate the Lyman break trans,seds,wavelen = lymanbreak(z,wavelengths[i], transmissions[i],SEDs[i]) #Calculate Luminosity_nu as a function of wavelength and SED flux = (wavelen**2/con.c)*seds #integrate to find the average Luminosity for x in range(len(wavelen)): if wavelen[x] >0: integrand1.append(trans[x]*flux[x]/(con.c**2/wavelen[x])) integrand2.append(trans[x]/(con.c**2/wavelen[x])) #averageLum = (con.c/central_wavelengths[i]**2)*averageLum else: integrand1.append(0) averageLum = np.trapz(integrand1)/np.trapz(integrand2) averageLum = (con.c/central_wavelengths[i]**2)*averageLum #plot #ax.loglog(wavelen,seds, basex = 10, basey = 10) ax.plot(central_wavelengths[i], averageLum, 'ob') ax.set_title("Spectrum Observed at a redshift of %s"%(int(z))) 
fig.savefig("mag_rs_%s.png"%(int(z))) plt.close('all') #method to shift the SED as a function of z using interpolation def SEDS(z,SED, lam, filterwavelengths): set_intz = interp1d(lam*(1+z), SED, kind='linear') intwavelengths = [] for x in range(len(filterwavelengths)): intwavelengths.append([]) intwavelengths[x].append(set_intz(filterwavelengths[x])) return np.asarray(intwavelengths) central_wavelengths = [10500,11400,12800,15500,17000] transmissions = [tranj1,tranj2,tranj3,tranhS,tranhL] wavelengths = [lamj1,lamj2,lamj3,lamhS,lamhL] redshifts = [j1_r,j2_r,j3_r,hS_r,hL_r] #for x in redshifts: #sedsin = SEDS(x, totalSED, lamA, wavelengths) #plotLum(x,wavelengthsin,sedsin,transmissionsin, central_wavelengths) # + from scipy.integrate import quad import matplotlib.patches as patches '''converting the L_nu values into the flux_nu to be used in luminosity distance calculations''' #http://www.astro.cornell.edu/academics/courses/astro201/cosmoparms.htm ##D_L = (1+z)*D_M equation omegaM, omegaL = 0.27, 0.73 dH = 9.26e25 #units are h^(-1)*m def inverseE(z): return 1/(np.sqrt(omegaM*(1+z)**3 + omegaL)) #the D_C part of the equation def D_C(z): integral =quad(inverseE, 0, z) return dH* integral[0] #the D_L equation def dL(z): eq = (1+z)*D_C(z) return eq #equation for F_nu def Fnu(z, LUM): eq = (1+z)*(LUM/(4*np.pi*(dL(z)**2))) #LUM still needs to be figured out (eg. number or array) return eq #converting flux to magnitudes '''def gAB(z, nu, trans, LUM): #function to fill the integration part of the mag_AB's log function #top half of the integration part of the mag_AB's log function eq = np.trapz((trans*Fnu(z, LUM))/(con.h*nu)) #bottom half of the integration part of the mag_AB's log function eq2 = np.trapz(trans/(con.h*nu)) result = eq/eq2 return result''' def magAB(z, LUM): #eq = -48.6 - 2.5*np.log10(gAB(z, nu, trans, LUM)) eq = -48.6 - 2.5*np.log10(Fnu(z, LUM)) return eq def lymanbreakTotal(z, sed_obs, lam): if lam[0]>lam[len(lam)-1]: for x in lam: if x < 1200: zeroF = np.zeros(len(lam)-int(np.where(lam == x)[0]))+0.1 return np.append( sed_obs[:int(np.where(lam == x)[0])],zeroF,) return sed_obs #NOT TESTED YET - ONLY TESTED FOR DESCENDING LAM else: for x in lam: if x > 1200*(1+z): zeroF = np.zeros(int(np.where(lam == x)[0]))+0.1 return np.append(zeroF, sed_obs[int(np.where(lam == x)[0]):]) return sed_obs #Plot the theoretical magnitude of the SED overlayed with the magnitude of the #theoretical observed flux in each filter def plotTheoreticalMagAB(SED_lam, lam, trans, lamfilters, filterscw, z): fig = plt.figure() ax = fig.add_subplot(111) #Convert SED_lam to SED_nu, incorporate lyman break, convert to flux_nu, #redshift lam, and find magnitude of SED as a function of wavelength SED_lam = lymanbreakTotal(z,SED_lam, lam) SED_nu = (lam**2/con.c)*SED_lam flux_nu = Fnu(z,SED_nu) lamz = lam*(1+z) totalmag_lam = magAB(z,SED_nu) #Find integrated, average flux through filters and find corresponding magnitude filterL_nu = SEDS(z, SED_nu,lam, lamfilters) for x in range(len(lamfilters)): filterflux_nu = Fnu(z,filterL_nu[x][0]) averageFlux_nu = np.trapz(trans[x]*filterflux_nu/(con.c**2/lamfilters[x]))/np.trapz(trans[x]/(con.c**2/lamfilters[x])) #averageFlux_lam = (con.c/filterscw[x]**2)*averageFlux_nu averageMag = -48.6 - 2.5*np.log10(averageFlux_nu) ax.plot(central_wavelengths[x], averageMag, 'ob') #ax.set_xlim(0,lamfilters[len(lamfilters)-1][len(lamfilters[len(lamfilters)-1])-1]) for x in lamfilters: ax.add_patch(patches.Rectangle((x[0],-8),x[len(x)-1]-x[0],-9, alpha = .1)) #ax.plot([x[0],x[0]],[-9,-16], 'r') 
#ax.plot([x[len(x)-1],x[len(x)-1]],[-9,-16],'b') ax.set_xlim(0,50000) ax.set_ylim(-8,-16.5) ax.plot(lamz,totalmag_lam) #ax.plot( 1210*(1+z),-10, 'or') ax.set_title("Theoretical Magnitude and Measured Flux at Redshift %s"%(z)) fig.savefig("TheoreticalMag_z_%s.png"%(z)) plt.close('all') redshifts.append(6.6) for x in redshifts: plotTheoreticalMagAB(totalSED, lamA, transmissions, wavelengths, central_wavelengths, x) # -
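# +
# As a quick sanity check of the cosmology helpers above, the luminosity
# distance should grow monotonically with redshift. This is a minimal sketch
# that assumes the `dL` function defined earlier (values stay in the same
# h^-1 metre units as `dH`); the test redshifts below are arbitrary.
for z_test in [0.5, 1.0, 3.0, 6.6]:
    print(z_test, dL(z_test))
# -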
12,411
/HW1/.ipynb_checkpoints/homework3-checkpoint.ipynb
a1235a77cc08f316bdd120ef0a5d27f25cab0f43
[]
no_license
moraskool/CSCI-570-Foundations-of-Artificial-Intelligence-Course
https://github.com/moraskool/CSCI-570-Foundations-of-Artificial-Intelligence-Course
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
16,944
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import sys from collections import defaultdict, deque from queue import PriorityQueue import fileinput import heapq import time # + class Node: def __init__( self, args): self.X = args[0] self.Y = args[1] self.Z = args[2] def __init__(self): pass def __repr__(self): return "Node([{0},{1},{2}])".format(self.x, self.y, self.z) def __str__(self): return "x: {0}, y: {1}, z: {2}".format(self.x, self.y, self.z) # - # works! def LUT(action,node): x, y, z = node result = [] di = { 1: [x + 1, y, z], 2: [x - 1, y, z], 3: [x,y + 1, z], 4: [x,y - 1, z], 5: [x,y, z + 1], 6: [x, y ,z - 1], 7: [x + 1, y + 1, z], 8: [x + 1, y - 1, z], 9: [x - 1, y + 1, z], 10: [x - 1, y - 1, z], 11: [x + 1, y,z + 1], 12: [x + 1, y , z - 1], 13: [x - 1, y, z + 1], 14: [x - 1, y ,z - 1 ], 15: [x, y + 1, z + 1], 16: [x, y + 1,z - 1], 17: [x, y - 1, z + 1], 18: [x, y - 1,z - 1], } result = di[action] return result # + # then do BFS, cost/weight is 1 def doBFS(graph, start, goal): if start == goal: return PrintStartisEnd(start,end) ## print to file here n = Node() s = tuple(start[0]) # parse start - initial state g = tuple(goal[0]) # parse end parent = dict() cost = {} # for cost cost[s] = 0 # path cost of initial state is zero frontier = [(0, s, ())] # parent node explored = {} # empty set while len(frontier) > 0: # loop do c, cur_node, parent = heapq.heappop(frontier) # choose shallowest node in frontier if (cur_node not in explored) : children = graph[cur_node] parent = parent + (cur_node,) explored[cur_node] = cur_node # add node to explored if cur_node == g: PrintBFS(parent,cost) return True for child in children: child = tuple(child) cost[child] = 1 heapq.heappush(frontier, (c + 1, child, parent)) # order by cost in heap # in case there's no solution PrintFailure() # - ## this is very similar, in fact this is BFS,but with uniform cost, so have to track the cost/weight def doUCS(graph, start, goal): if start == goal: return PrintStartisEnd(start,end) ## print to file here s = tuple(start[0]) # initial state cost = {} # for cost cost[s] = 0 # path cost of initial state is zero frontier = set() explored = {} # empty set frontier.add(tuple([s])) # keep track of explored nodes nodePath = {} while frontier: # loop do path = frontier.pop() # choose shallowest node in frontier node = path[-1] # get the last node from the path if node == tuple(goal[0]): PrintUCS(path,cost) return True if (node not in explored) : children = graph[node] for child in children: child = tuple(child) openPath = list(path) if child not in openPath: openPath.append(child) if child not in cost: cost[child] = getNodeCost(node, child) # specialized cost function frontier.add(tuple(openPath)) # add that child to the frontier explored[node] = node # add node to explored # if there's no solution PrintFailure() def getNodeCost(U, V): if ( U[0]!=V[0] and U[1]!=V[1]) or (U[0]!=V[0] and U[2]!=V[2]) or (U[1]!=V[1] and U[2]!=V[2]) : return 14 else: return 10 def getHeuristics(node, child): # APPLY EVALUATION FUNCTION HERE # need to design an approximate val of f f = abs(node[0] - child[0]) + abs(node[1] - child[1]) + abs(node[2] - child[2]) return f # Astar here ## this is very similar, in fact this is UCS ,but with an g(n) and h(n) def doAStar(graph, start, goal): if start == goal: return PrintStartisEnd(start,end) ## print to file here s = 
tuple(start[0]) # initial state e = tuple(goal[0]) cost = {s:0} # path cost of initial state is zero frontier = set() explored = {} # empty set frontier.add(tuple([s])) # keep track of open nodes nodePath = {} cumulative_cost = {s: 0} while frontier: # loop do path = frontier.pop() # choose shallowest node in frontier node = path[-1] # get the last node from the path if node == tuple(goal[0]): PrintUCS(path,cost) return True children = graph[node] for child in children: child = tuple(child) new_cost = cumulative_cost[tuple(node)] + getNodeCost(node, child) if (child not in cumulative_cost or new_cost < cumulative_cost[child]): cumulative_cost[child] = new_cost choose = new_cost + getHeuristics(e, child) openPath = list(path) traversal[openpath] openPath.append(child) frontier.add(tuple(openPath)) # add that child to the frontier # specialized cost function if child not in cost: cost[child] = getNodeCost(node, child) explored[node] = node # might have to change to a set /dict # if there's no solution PrintFailure() # + def PrintBFS(solution_path,cost): with open('output.txt', 'w') as f: print(len(solution_path) - 1, sep = "\n", file=f) print(len(solution_path), sep = "\n", file=f) print(' '.join(map(str,solution_path[0])), "0", file=f) for path in solution_path[1:]: print(' '.join(map(str,path)), cost[path], file= f) # + def PrintUCS(solution_path,cost): sum1 = sum(value for key, value in cost.items() if key in solution_path) with open('output.txt', 'w') as f: print(sum1, sep = "\n", file=f) print(len(solution_path), sep = "\n", file=f) print(' '.join(map(str,solution_path[0])), "0", file=f) for path in solution_path[1:]: print(' '.join(map(str,path)), cost[path], file= f) # - def PrintAStar(solution_path,cost): sum1 = sum(value for key, value in cost.items() if key in solution_path) with open('output.txt', 'w') as f: print(sum1, sep = "\n", file=f) print(len(solution_path), sep = "\n", file=f) print(' '.join(map(str,solution_path[0])), "0", file=f) for path in solution_path[1:]: print(' '.join(map(str,path)), cost[path], file= f) def PrintFailure(): with open('output.txt', 'w') as f: print('FAIL', file=f) def PrintStartisEnd(start, end): with open('output.txt', 'w') as f: print("0", sep = "\n", file=f) print("1", sep = "\n", file=f) print(' '.join(map(str,start[0])) , 0, file=f) # Map and transform the actions using a LookUpTable(LUT) def MapGrid(vertex, action): edges = [] node = Node() for A, N in zip(vertex, action): for n in N[:]: node = LUT(n,A) edges.append([A] + [node]) ## here return edges def addEdge(u,v): if node not in self.graph: self.graph[u]=[v] else: self.graph[u].append(v) # Create an Adjacency Matrix for representing graph def createGraph(edges): graph = defaultdict(list) # store the graph in a dictionary for faster access #(graph[tuple(edge[0])].append[tupple(edge[1])] for edge in edges) for edge in edges: a, b = tuple(edge[0]),edge[1] graph[a].append(b) return graph # + #(graph[tuple(edge[0])].append[tupple(edge[1])] for edge in edges) # #my_dictionary = {k: f(v) for k, v in my_dictionary.items()} #edges.append([A] + [[x, y, z]]) start_time = time.time() with open("input5.txt","r") as f: # Get algo type algoName = next(f).split() dimension = [] start = [] end = [] grids = [] vertex = [] action = [] # Get dimension, starting and ending positions dimension.append([int(k) for k in next(f).split()]) start.append([int(k) for k in next(f).split()]) end.append([int(k) for k in next(f).split()]) # Get the number of lines/grids NumGrid = next(f).split() strings = [str(integer) 
for integer in NumGrid] a_string = "".join(strings) NumGrid = int(a_string) #print(dimension, algoName, start, end, NumGrid) lines = f.readlines() # - for line in lines: line = [int(v) for v in line.split()] v, a = line[:3], line[3:] vertex.append(v) action.append(a) gridMap = MapGrid(vertex, action) #print(gridMap) graph = createGraph(gridMap) #print(graph) print("--- %s seconds ---" % (time.time() - start_time)) # + if algoName[0] == "BFS": doBFS(graph, start, end) elif algoName[0] == "UCS": doUCS(graph, start, end) elif algoName[0] == "A*": doAStar(graph, start, end) print("--- %s seconds ---" % (time.time() - start_time)) # -
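# +
# A small sanity check of the step-cost and heuristic helpers defined above,
# using made-up coordinates (a sketch, not part of the assignment input):
# axis-aligned moves should cost 10, diagonal moves 14, and the heuristic is
# the Manhattan distance between the two cells.
print(getNodeCost((0, 0, 0), (1, 0, 0)))    # expected 10
print(getNodeCost((0, 0, 0), (1, 1, 0)))    # expected 14
print(getHeuristics((0, 0, 0), (2, 1, 3)))  # expected 6
# -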
10,780
/Tuple.ipynb
60d694095b46c8b4c574b5f81cac985571c5bec5
[]
no_license
Akshayunde/Basic_Program_python
https://github.com/Akshayunde/Basic_Program_python
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
5,340
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.15.2
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Tuple

"""
Title  : Tuple Operations
Author : Akshay unde
Date   : 14/02/2020
"""

tuple1 = ("apple","banana","orange")
print(tuple1)

# access a tuple item using its index
print(tuple1[2])

# access the last tuple element using a negative index
print(tuple1[-1])

tuple2 = ("one","two","three","four","five","six")
# return the third to fifth items in the tuple
print(tuple2[2:5])

# slicing with a negative index range
print(tuple2[-5:-1])

# +
# Convert the tuple into a list to be able to change it
x = ("apple","banana","orange")
y = list(x)
y[1] = "kiwi"
x = tuple(y)
print(x)
# -

# check the type of x
type(x)

# loop through a tuple
for item in tuple2:
    print(item)

# check whether the item "orange" is in the tuple
if "orange" in tuple1:
    print("yes, 'orange' is in the fruits tuple")

# check the length of a tuple
print(len(tuple2))

"""
You cannot add an item to a tuple
and you cannot remove an item from a tuple.
"""

# +
# join two tuples
tuple3 = ("a", "b" , "c")
tuple4 = (1, 2, 3)

tuple5 = tuple3 + tuple4
print(tuple5)
# -

# tuple constructor
thistuple = tuple(("apple", "banana", "cherry")) # note the double round-brackets
print(thistuple)
w[0]
max_temp = row[-1]
f.close()
print('Since weather records began, the day with the highest temperature in Seoul was', max_date + ',', 'at', max_temp, 'degrees.')
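# +
# A short illustration of the immutability note above (this assumes the
# `tuple1` defined earlier in this notebook): assigning to a tuple element
# raises a TypeError, which is why the convert-to-list workaround is needed.
try:
    tuple1[0] = "mango"
except TypeError as err:
    print("TypeError:", err)
# -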
1,462
/Bandits in a random field/gaussian_random_field.ipynb
c6bf5a528788bb7edc3bb65e6e1dc99957c5972b
[]
no_license
marekbojko/Summer-Project
https://github.com/marekbojko/Summer-Project
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
73,356
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python [conda env:Anaconda3] # language: python # name: conda-env-Anaconda3-py # --- # + # Main dependencies import numpy import scipy.fftpack def fftind(size): """ Returns a numpy array of shifted Fourier coordinates k_x k_y. Input args: size (integer): The size of the coordinate array to create Returns: k_ind, numpy array of shape (2, size, size) with: k_ind[0,:,:]: k_x components k_ind[1,:,:]: k_y components Example: print(fftind(5)) [[[ 0 1 -3 -2 -1] [ 0 1 -3 -2 -1] [ 0 1 -3 -2 -1] [ 0 1 -3 -2 -1] [ 0 1 -3 -2 -1]] [[ 0 0 0 0 0] [ 1 1 1 1 1] [-3 -3 -3 -3 -3] [-2 -2 -2 -2 -2] [-1 -1 -1 -1 -1]]] """ k_ind = numpy.mgrid[:size, :size] - int( (size + 1)/2 ) k_ind = scipy.fftpack.fftshift(k_ind) return( k_ind ) def gaussian_random_field(alpha = 3.0, size = 128, flag_normalize = True): """ Returns a numpy array of shifted Fourier coordinates k_x k_y. Input args: alpha (double, default = 3.0): The power of the power-law momentum distribution size (integer, default = 128): The size of the square output Gaussian Random Fields flag_normalize (boolean, default = True): Normalizes the Gaussian Field: - to have an average of 0.0 - to have a standard deviation of 1.0 Returns: gfield (numpy array of shape (size, size)): The random gaussian random field Example: import matplotlib import matplotlib.pyplot as plt example = gaussian_random_field() plt.imshow(example) """ # Defines momentum indices k_idx = fftind(size) # Defines the amplitude as a power law 1/|k|^(alpha/2) amplitude = numpy.power( k_idx[0]**2 + k_idx[1]**2 + 1e-10, -alpha/4.0 ) amplitude[0,0] = 0 # Draws a complex gaussian random noise with normal # (circular) distribution noise = numpy.random.normal(size = (size, size)) \ + 1j * numpy.random.normal(size = (size, size)) # To real space gfield = numpy.fft.ifft2(noise * amplitude).real # Sets the standard deviation to one if flag_normalize: gfield = gfield - numpy.mean(gfield) gfield = gfield/numpy.std(gfield) return gfield def main(): import matplotlib import matplotlib.pyplot as plt example = gaussian_random_field() plt.imshow(example, cmap='Purples') plt.show() if __name__ == '__main__': main() # + import matplotlib import matplotlib.pyplot as plt example = gaussian_random_field(size = 10, flag_normalize = False) plt.imshow(example, cmap='Purples') plt.show() # + priors = np.array([i for i in range(100)]) print (priors) print (priors%10) print (priors//10) plt.scatter(priors%10+1,priors//10+1) plt.xlim(0,11) plt.ylim(0,11) plt.show() 
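# +
# A quick illustration of the role of `alpha` (a sketch, not part of the
# original script): the amplitude falls off as 1/|k|^(alpha/2), so smaller
# alpha gives rougher fields and larger alpha smoother ones. The grid size
# and alpha values below are arbitrary choices.
import matplotlib.pyplot as plt
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, a in zip(axes, [2.0, 3.0, 4.0]):
    ax.imshow(gaussian_random_field(alpha=a, size=64), cmap='Purples')
    ax.set_title('alpha = %.1f' % a)
    ax.axis('off')
plt.show()
# -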
l0aW9uIDwgZmlsZURhdGEuYnl0ZUxlbmd0aCkgewogICAgICBjb25zdCBsZW5ndGggPSBNYXRoLm1pbihmaWxlRGF0YS5ieXRlTGVuZ3RoIC0gcG9zaXRpb24sIE1BWF9QQVlMT0FEX1NJWkUpOwogICAgICBjb25zdCBjaHVuayA9IG5ldyBVaW50OEFycmF5KGZpbGVEYXRhLCBwb3NpdGlvbiwgbGVuZ3RoKTsKICAgICAgcG9zaXRpb24gKz0gbGVuZ3RoOwoKICAgICAgY29uc3QgYmFzZTY0ID0gYnRvYShTdHJpbmcuZnJvbUNoYXJDb2RlLmFwcGx5KG51bGwsIGNodW5rKSk7CiAgICAgIHlpZWxkIHsKICAgICAgICByZXNwb25zZTogewogICAgICAgICAgYWN0aW9uOiAnYXBwZW5kJywKICAgICAgICAgIGZpbGU6IGZpbGUubmFtZSwKICAgICAgICAgIGRhdGE6IGJhc2U2NCwKICAgICAgICB9LAogICAgICB9OwogICAgICBwZXJjZW50LnRleHRDb250ZW50ID0KICAgICAgICAgIGAke01hdGgucm91bmQoKHBvc2l0aW9uIC8gZmlsZURhdGEuYnl0ZUxlbmd0aCkgKiAxMDApfSUgZG9uZWA7CiAgICB9CiAgfQoKICAvLyBBbGwgZG9uZS4KICB5aWVsZCB7CiAgICByZXNwb25zZTogewogICAgICBhY3Rpb246ICdjb21wbGV0ZScsCiAgICB9CiAgfTsKfQoKc2NvcGUuZ29vZ2xlID0gc2NvcGUuZ29vZ2xlIHx8IHt9OwpzY29wZS5nb29nbGUuY29sYWIgPSBzY29wZS5nb29nbGUuY29sYWIgfHwge307CnNjb3BlLmdvb2dsZS5jb2xhYi5fZmlsZXMgPSB7CiAgX3VwbG9hZEZpbGVzLAogIF91cGxvYWRGaWxlc0NvbnRpbnVlLAp9Owp9KShzZWxmKTsK", "ok": true, "headers": [["content-type", "application/javascript"]], "status": 200, "status_text": ""}}, "base_uri": "https://localhost:8080/", "height": 109} from google.colab import files files.upload() # + id="aO3iQ7PZOqeJ" outputId="bd980754-60b2-42c7-87d5-a5aa22dba7f3" colab={"base_uri": "https://localhost:8080/", "height": 204} AABA_data=pd.read_csv('AABA_2006-01-01_to_2018-01-01.csv') AABA_data.dropna() AABA_data['month']=pd.DatetimeIndex(AABA_data['Date']).month AABA_data['year']=pd.DatetimeIndex(AABA_data['Date']).year AABA_data['H_L_Diff']=AABA_data['High']-AABA_data['Low'] AABA_data.head() # + id="MRD3rp_fYCuA" outputId="54ee8c1d-031d-4aae-a926-34d1d5411278" colab={"base_uri": "https://localhost:8080/", "height": 296} figsize=(15,15) sns.scatterplot(x='Date', y='Open', hue='Volume', data=AABA_data) #sns.scatterplot(x='Date', y='High', hue='Volume', data=AABA_data) plt.title('AABA Stock Open Information') plt.show() #style.use('fivethirtyeight') # + id="5LpxENFlOfWy" outputId="616873dc-638a-4815-e8ac-6a766fa88ccd" colab={"base_uri": "https://localhost:8080/", "height": 17} AABA_chart=alt.Chart(AABA_data,title='AABA Stock Price').mark_point().encode( x='year:N', y='mean(Open):Q', color='H_L_Diff:Q', tooltip='High:Q' ).interactive() AABA_chart.save('AABA_chart.html') from google.colab import files files.download('AABA_chart.html') # + id="W_30V_mmoPtL" outputId="c3a083c7-5b3b-4bfa-ba9e-ca49a4e25075" colab={"base_uri": "https://localhost:8080/", "height": 398} AABA_chart # + id="r0Og2OfS2JdD" outputId="70da70e2-b454-4370-8461-eb556a70d7a5" colab={"base_uri": "https://localhost:8080/", "height": 558} AABA_data_open=AABA_data[['Date','Open','High','Low']] plt.style.use('fivethirtyeight') ax = AABA_data_open.set_index('Date').plot(figsize=(12, 8)) ax.set_ylabel('Stock Price') ax.set_xlabel('Date') ax.set_title('AABA Stock Information') plt.show() # + id="k-9mwYBH2Oan" # + id="-ZGut11K2pLu" outputId="b2850164-3699-4d69-eb8a-3862449bc7f5" colab={"base_uri": "https://localhost:8080/"} # + id="FqFaZxth2tUp" outputId="53e440d6-7f7a-415d-aa9f-2b7fc1eb5e41" colab={"base_uri": "https://localhost:8080/"} AABA_data.rename(columns={"Date": "ds", "Open": 'y'}, inplace=True) AABA_data.tail() my_model = Prophet(interval_width=.95) my_model.fit(AABA_data) # + id="CIs-H_2L3Iym" outputId="994cf332-8613-409a-eb9c-fdb9e6a578aa" colab={"base_uri": "https://localhost:8080/", "height": 204} #future_dates = my_model.make_future_dataframe(periods=36, freq='MS') #future_dates.tail() 
future_dates=my_model.make_future_dataframe(periods=365, freq='D') future_dates.tail() # + id="8zLMY86y3c9S" outputId="6bc43182-ac8f-44df-b9f4-340800022f2b" colab={"base_uri": "https://localhost:8080/", "height": 309} forecast = my_model.predict(future_dates) forecast.head() # + id="Y_y7ZfoMmW2t" outputId="a9c6e810-fc04-49d3-dc15-d8ab18470ae3" colab={"base_uri": "https://localhost:8080/", "height": 309} forecast.rename(columns={"ds": "Date", "yhat": 'Open'}, inplace=True) forecast.tail() # + id="cb-C_8zEmhv8" outputId="c5627205-2a72-4924-b33d-ce7c0fcfcc4a" colab={"base_uri": "https://localhost:8080/", "height": 499} forecast_modify=forecast[['Date','Open','yhat_upper','yhat_lower']] plt.style.use('fivethirtyeight') ax = forecast_modify.set_index('Date').plot(figsize=(12, 8)) ax.set_ylabel('Stock Price') ax.set_xlabel('Date') ax.set_title('AABA Stock Prediction Information in 2018') plt.show() # + id="bDVpPySWj7ab" outputId="0ccefe3c-01da-43b6-dec5-8c5210643127" colab={"base_uri": "https://localhost:8080/", "height": 492} alt.Chart(forecast, title='AABA Stck prediction for 2018').mark_point().encode( x='Date:N', y='Open:Q' ).interactive() # + id="nqQUS41ymhIr" outputId="3646b5b1-6745-4b59-ea66-c0ef1048f87f" colab={"base_uri": "https://localhost:8080/", "height": 609} forecast # + id="3nAkZ_ac3tC8" outputId="073400c0-c62c-4bf5-a0e9-f78832b4ad30" colab={"base_uri": "https://localhost:8080/", "height": 949} my_model.plot(forecast) plt.title('AABA Stck prediction for 1018') # + id="V8-VxRZ-36Ry" outputId="d1475984-218e-40ba-e79c-1aa82bd6d76d" colab={"base_uri": "https://localhost:8080/", "height": 1000} my_model.plot_components(forecast) # + id="S18F3HUL4OZD" outputId="1fcdd4b7-4824-4a25-986f-e52581db4c22" colab={"base_uri": "https://localhost:8080/", "height": 949} from fbprophet.plot import add_changepoints_to_plot fig_air = my_model.plot(forecast) a = add_changepoints_to_plot(fig_air.gca(), my_model, forecast) # + id="SJWQXihE40BV" my_model_mult = Prophet(interval_width=.95, seasonality_mode='multiplicative') # + id="NXtzIAsm58NZ" outputId="4515ac40-e4dd-4fcd-e629-7589d75c4beb" colab={"base_uri": "https://localhost:8080/"} my_model_mult.fit(AABA_data) # + id="BhJehA_r6AhW" forecast = my_model_mult.predict(future_dates) # + id="cPF5QBRI6HfA" outputId="b067c65d-9aed-4434-b4d4-a6881dd1e629" colab={"base_uri": "https://localhost:8080/", "height": 849} my_model_mult.plot(forecast) # + id="2NzoMU2M6M4N"
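# +
# An optional sketch of how the forecasts above could be scored, using the
# cross-validation helpers that ship with fbprophet. The window sizes below
# are arbitrary choices, and the cell assumes the fitted `my_model` from the
# earlier cells.
from fbprophet.diagnostics import cross_validation, performance_metrics
df_cv = cross_validation(my_model, initial='2190 days', period='180 days', horizon='365 days')
performance_metrics(df_cv)[['horizon', 'mae', 'rmse', 'mape']].head()
# -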
13,056
/Web Scraping of ACC team listing .ipynb
941ecaee1fc10f14817131ae1d5eb44b0a69a348
[]
no_license
jcharbour/Data-projects-
https://github.com/jcharbour/Data-projects-
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
27,628
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python [conda env:featureautomation] # language: python # name: conda-env-featureautomation-py # --- # ## Load data # # This dataset contains transactions of an online retail store in the UK. The files takes a bit to load. I downloaded this file as an xls from the [UCI machine learning dataset library](https://archive.ics.uci.edu/ml/datasets/online+retail#). I used excel to save it as a csv. # # I will largely follow ths process described in a [Feature Tools tutorial](https://github.com/Featuretools/Automated-Manual-Comparison/blob/master/Retail%20Spending/notebooks/Automated%20Retail%20Spending.ipynb). Note that the tutorial changed the column names from the original file (which I have not done). # + import pandas as pd import matplotlib.pyplot as plt import seaborn as sns # %matplotlib inline create_data = False # - # Field info (from UCI machine learning dataset website): # * InvoiceNo: Invoice number. Nominal, a 6-digit integral number uniquely assigned to each transaction. If this code starts with letter 'c', it indicates a cancellation. # * StockCode: Product (item) code. Nominal, a 5-digit integral number uniquely assigned to each distinct product. # * Description: Product (item) name. Nominal. # * Quantity: The quantities of each product (item) per transaction. Numeric. # * InvoiceDate: Invice Date and time. Numeric, the day and time when each transaction was generated. # * UnitPrice: Unit price. Numeric, Product price per unit in sterling. # * CustomerID: Customer number. Nominal, a 5-digit integral number uniquely assigned to each customer. # * Country: Country name. Nominal, the name of the country where each customer resides. df = pd.read_csv('./data/raw/OnlineRetail.csv', parse_dates=["InvoiceDate"]) # Restrict data to 2011 - from tutorial. not sure why they did this. df = df[df['InvoiceDate'].dt.year == 2011] df.columns len(df) # label cancellation orders df['Cancelled'] = df['InvoiceNo'].str.startswith('C') # drop the duplicates df = df.drop_duplicates() # drop rows with null customer id df = df.dropna(axis=0) # Convert to dollars and create a field representing the total spent df['UnitPrice'] = df['UnitPrice'] * 1.65 df['Total'] = df['UnitPrice'] * df['Quantity'] df.describe() df.head() df.dtypes # ## Create response variable # # The following code is a little complex. I compute how much a customer spent each month (including months with no transactions). 
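# +
# A toy illustration (an aside, not from the tutorial) of the
# groupby + resample pattern the next cells rely on: with a DatetimeIndex,
# `resample('MS')` buckets each customer's spending into calendar months,
# and months with no transactions inside the resampled range sum to 0.
toy = pd.DataFrame({'CustomerID': [1, 1, 2], 'Total': [10.0, 5.0, 7.0]},
                   index=pd.to_datetime(['2011-01-03', '2011-03-15', '2011-01-20']))
toy.groupby('CustomerID').resample('MS')['Total'].sum()
# -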
# + def monthly_spend(df): """Identify how much each customer spent each month.""" df.index = df['InvoiceDate'] monthly_sums = df.groupby('CustomerID').resample('MS')['Total'].sum() return monthly_sums.reset_index() def customer_by_month(monthly_data, df): """Create an index for each customer for each month of the data set.""" labels = monthly_data.set_index(['InvoiceDate', 'CustomerID']) midx = pd.MultiIndex.from_product( [pd.date_range('2011-01-01', '2012-01-01', freq='MS'), df['CustomerID'].unique()], names=labels.index.names) return labels.reindex(midx, fill_value=0).reset_index() def monthly_min_date(monthly_data): """Create a table which has all months since a customer's first transaction""" min_dates = (monthly_data .groupby('CustomerID')['InvoiceDate'] .min() .apply(lambda x: pd.date_range(x, '2012-01-01', freq='MS').tolist()) .reset_index() ) return pd.DataFrame([(x, i) for x, y in zip(min_dates['CustomerID'], min_dates['InvoiceDate']) for i in y], columns=['CustomerID', 'InvoiceDate']) # join the table where each customer has a record each month with the table where each customer has a # record since account creation. This way customers do not have records before their first transaction. #labels = labels.merge(relevant_months, on=['CustomerID', 'InvoiceDate'], how='inner') monthly_df = monthly_spend(df) labels = customer_by_month(monthly_df, df).merge(monthly_min_date(monthly_df), on=['CustomerID', 'InvoiceDate'], how='inner') # - # tutorial starts with march data. labels = labels[labels['InvoiceDate'] >= '2011-03-01'] labels.head() # Notice that I have more samples than the feature tools tutorial because they accidently removed all transactions on the first of the month... labels.describe() labels.loc[labels['CustomerID'] == 12347] df[(df['CustomerID'] == 12347) & (df['InvoiceDate'] >= '2011-12-01')]['Total'].sum() # ## Feature Automation # This next series of commands will load data into feature tools and explain the dataset to feature tools. import featuretools as ft # + es = ft.EntitySet(id="Online Retail Logs") # Add the entire data table as an entity es.entity_from_dataframe("purchases", # name of entity set dataframe=df, # data index="purchases_index", # name of new index time_index = 'InvoiceDate', # time associated with each row variable_types = {'Description': ft.variable_types.Text}) # specifiy variable type es['purchases'] # + es.normalize_entity(new_entity_id="products", # new entity for products. base_entity_id="purchases", # built from purchases entity index="StockCode", # Index with StockCode column from purchases entity. (unique key) additional_variables=["Description"]) # bring in this variable es['products'].df.head() # + es.normalize_entity(new_entity_id="customers", # customer entity base_entity_id="purchases", # from purchases index="CustomerID") # unique key is CustomerID es.normalize_entity(new_entity_id="orders", # order entity base_entity_id="purchases", # from purchases index="InvoiceNo", # unique key is InvoiceNo additional_variables=["Country", 'Cancelled']) # Include these variables. es # - es['customers'].df.head() es['orders'].df.head() # ## Deep Feature Synthesis # Here, we use featuretools to create new features. 
if create_data: feature_matrix,_ = ft.dfs(entityset=es, # entity target_entity='customers', # what we're trying to predict agg_primitives=['max', 'min', 'mode', 'mean', 'avg_time_between'], # requested aggregations trans_primitives=['day', 'month', 'hour', 'weekend'], # requested transformations cutoff_time=labels, # define time period of predictions verbose=1, # how much stdout cutoff_time_in_index=True, # specify that we've given cutoff times chunk_size=50, # how much data to give each worker n_jobs=-1, # how many threads to create max_depth=1) # how many aggregations feature_matrix = feature_matrix.reset_index() feature_matrix.to_csv('./data/processed/dfs_depth1.csv') feature_matrix.head() else: feature_matrix = pd.read_csv('./data/processed/dfs_depth1.csv') feature_matrix['time'] = pd.to_datetime(feature_matrix['time']) feature_matrix = feature_matrix.drop('Unnamed: 0', axis=1) feature_matrix.columns # look at a single output feature_matrix.iloc[0, :3] # demonstrate we understand the output df[(df['CustomerID'] == 12346) & (df['InvoiceDate'] < '2011-03-01')]['Quantity'].max() feature_matrix.shape # create categorical response variable feature_matrix['Total'] = feature_matrix['Total'].apply(lambda x: 1 if x > 500 else 0) # + import numpy as np def create_train_test(month, feature_matrix, drop_cols=['CustomerID', 'time', 'month', 'Total']): """Basic cleaning and return train/test data.""" # remove columns we know will not contribute feature_matrix = feature_matrix.drop(columns= ['MODE(purchases.InvoiceNo)', 'MODE(purchases.StockCode)']) # dummy code strings feature_matrix = pd.get_dummies(feature_matrix) # fill nans feature_matrix = feature_matrix.fillna(0) # Labels feature_matrix['month'] = feature_matrix['time'].dt.month train_labels = feature_matrix.loc[feature_matrix['month'] < month, 'Total'] test_labels = feature_matrix.loc[feature_matrix['month'] >= month, 'Total'] y_train = np.array(train_labels).reshape((-1, )) y_test = np.array(test_labels).reshape((-1, )) # Features X_train = feature_matrix[feature_matrix['time'].dt.month < month].drop(columns=drop_cols) X_test = feature_matrix[feature_matrix['time'].dt.month >= month].drop(columns=drop_cols) return (X_train, X_test, y_train, y_test) # - X_train, X_test, y_train, y_test = create_train_test(11, feature_matrix) print(np.mean(y_train)) print(np.mean(y_test)) # + from sklearn.linear_model import LogisticRegression model = LogisticRegression(random_state=0, class_weight='balanced') model.fit(X_train, y_train) # + from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score def print_performance(X, y, model): """Print model performance metrics.""" predictions = model.predict(X) probs = model.predict_proba(X)[:, 1] # Calculate metrics print('Precision: {}'.format(round(precision_score(y, predictions), 5))) print('Recall: {}'.format(round(recall_score(y, predictions), 5))) print('F1 Score: {}'.format(round(f1_score(y, predictions), 5))) print('ROC AUC: {}'.format(round(roc_auc_score(y, probs), 5))) # - print_performance(X_train, y_train, model) print_performance(X_test, y_test, model) # ## Deeper Features # Beware that this takes forever!!! 
if create_data: feature_matrix,_ = ft.dfs(entityset=es, target_entity='customers', # what we're trying to predict agg_primitives=['max', 'min', 'mode', 'mean', 'avg_time_between'], # requested aggs trans_primitives=['day', 'month', 'hour', 'weekend'], # requested transformations n_jobs=-1, chunk_size=50, max_depth=2, # how many aggregations to do cutoff_time=labels, cutoff_time_in_index=True, verbose=1) feature_matrix = feature_matrix.reset_index() feature_matrix.to_csv('./data/processed/dfs_depth2.csv') feature_matrix.head() else: feature_matrix = pd.read_csv('./data/processed/dfs_depth2.csv') feature_matrix['time'] = pd.to_datetime(feature_matrix['time']) feature_matrix = feature_matrix.drop('Unnamed: 0', axis=1) feature_matrix.shape feature_matrix.columns # look at a single output feature_matrix.iloc[7000, [0, 1, 19]] # demonstrate we understand the output df['month'] = df['InvoiceDate'].dt.month df[(df['CustomerID'] == 13481) & (df['InvoiceDate'] < '2011-06-01')].groupby('month')['Total'].count() # create categorical response variable feature_matrix['Total'] = feature_matrix['Total'].apply(lambda x: 1 if x > 500 else 0) X_train, X_test, y_train, y_test = create_train_test(11, feature_matrix) model = LogisticRegression(random_state=0, class_weight='balanced') model.fit(X_train, y_train) print_performance(X_train, y_train, model) print_performance(X_test, y_test, model) putId="9a97df3a-62c7-4e8e-8fef-01d4f8ff0d2f" # %%bigquery --project $PROJECT_ID SELECT invoice_and_item_number, date, store_number, item_description, bottles_sold, sale_dollars FROM `bigquery-public-data.iowa_liquor_sales.sales` LIMIT 5 # + [markdown] id="YiKvVQUAap9w" # ## [Optional] Match your dataset to template # + [markdown] id="1mVjF5WYap9x" # If you use the example data, you can skip this step. # # This tutorial assumes that you have a dump of your sales data already available in BigQuery located at [YOUR_PROJECT].[YOUR_DATASET].[YOUR_SOURCE_TABLE] # # You are free to adapt the SQL query in the next cell to a SQL statement that transforms your data according to the template. # + id="Kueioxtrap9x" # # %%bigquery --project $PROJECT_ID # CREATE OR REPLACE VIEW bqmlforecast.training_data AS ( # SELECT # date, # item_name, # total_amount_sold # FROM # `[YOUR_PROJECT].[YOUR_DATASET].[YOUR_SOURCE_TABLE]` # ); # + [markdown] id="PFq6aLFBap9z" # #### Set parameters for ARIMA # + [markdown] id="pWbaGiY-ap90" # You can adjust these parameters to specify the start/end dates of your training data: # + colab={"base_uri": "https://localhost:8080/"} id="M2cyW1P8ap90" outputId="7253e1fd-6164-438b-a30b-53c49e61d6b2" ARIMA_PARAMS = { 'TRAININGDATA_STARTDATE': '2016-01-01', 'TRAININGDATA_ENDDATE': '2017-06-01', } ARIMA_PARAMS # + [markdown] id="BXCthvMvap93" # You can train ARIMA models on multiple products using the same query. In this notebook, you will train a single ARIMA model to make forecasts on 5 products (`item_name`). # + [markdown] id="zmz_Wh7yap93" # #### Prepare the training data # + [markdown] id="T1B5NDEfVL2C" # As you may observe while preparing the training data below, there are missing dates below (i.e. days with no transactions for the product). 
# # Without needing to do extra pre-processing yourself, BigQuery ML will automatically handle: # - _missing values_: these are imputed using local linear interpolation # - _duplicated timestamps_: values averaged across duplicated timestamps # - _spike and dip anomalies_: detected using local z-scores # + colab={"base_uri": "https://localhost:8080/", "height": 347} id="7-bBYO-Hap93" outputId="59fd1987-a01b-4e3a-9719-456f790bedf3" # %%bigquery --params $ARIMA_PARAMS --project $PROJECT_ID CREATE OR REPLACE TABLE bqmlforecast.training_data AS ( WITH topsellingitems AS( SELECT item_description, count(item_description) cnt_transactions FROM `bigquery-public-data.iowa_liquor_sales.sales` GROUP BY item_description ORDER BY cnt_transactions DESC LIMIT 5 #Top N ) SELECT date, item_description AS item_name, SUM(bottles_sold) AS total_amount_sold FROM `bigquery-public-data.iowa_liquor_sales.sales` GROUP BY date, item_name HAVING date BETWEEN @TRAININGDATA_STARTDATE AND @TRAININGDATA_ENDDATE AND item_description IN (SELECT item_description FROM topsellingitems) ); SELECT date, item_name, total_amount_sold FROM bqmlforecast.training_data ORDER BY item_name, date LIMIT 10 # + [markdown] id="liuS8qCnap96" # #### Plot historical data # + [markdown] id="tQxRDGLAap97" # To visualize the data in Python, first save the data to a Pandas dataframe, `dfhistorical`. # + id="xwVPZP3fap97" # %%bigquery dfhistorical --project $PROJECT_ID SELECT * FROM bqmlforecast.training_data # + [markdown] id="JiRjcr-Uap99" # Plot the historical data: # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="hCvY292Xap9-" outputId="3b464686-cc5a-4ad7-f860-860ffd919f17" itemslist = list(dfhistorical.item_name.unique()) for item in itemslist: datah = dfhistorical[dfhistorical.item_name==item] plot_historical_and_forecast(input_timeseries = datah, timestamp_col_name = "date", data_col_name = "total_amount_sold", forecast_output = None, actual = None, title = item) # + [markdown] id="ft6ioX_kap-A" # #### Train the time series model # + [markdown] id="BqGEpOzvW2pl" # Since you are training the model on multiple products in a single model creation statement, you will need to specify the parameter `TIME_SERIES_ID_COL` as `item_name`. Note that if you were only forecasting a single item, then you would not need to specify `TIME_SERIES_ID_COL`. For more information, see the [BigQuery ML time series model creation documentation](https://cloud.google.com/bigquery-ml/docs/reference/standard-sql/bigqueryml-syntax-create-time-series#create_model_syntax). # # Time series modeling in BigQuery ML can also account for holiday effects. By default, holiday effects modeling is disabled. But since this data is from the United States, and the data includes a minimum one year of daily data, you can also specify an optional `HOLIDAY_REGION`. With holiday effects enabled, spike and dip anomalies that appear during holidays will no longer be treated as anomalies. A full list of the holiday regions can be found in the [HOLIDAY_REGION documentation](https://cloud.google.com/bigquery-ml/docs/reference/standard-sql/bigqueryml-syntax-create-time-series#holiday_region). 
# # # + colab={"base_uri": "https://localhost:8080/", "height": 31} id="CibqJ8yVL6kS" outputId="6d8e6996-557a-4fd7-d8b1-22e8f3c9b075" # %%bigquery --project $PROJECT_ID CREATE OR REPLACE MODEL bqmlforecast.arima_model OPTIONS( MODEL_TYPE='ARIMA', TIME_SERIES_TIMESTAMP_COL='date', TIME_SERIES_DATA_COL='total_amount_sold', TIME_SERIES_ID_COL='item_name', HOLIDAY_REGION='US' ) AS SELECT date, item_name, total_amount_sold FROM bqmlforecast.training_data # + [markdown] id="BUkZYuZH-rn2" # ### Evaluate the model # + [markdown] id="7vEV-65m-0QJ" # You can use the `ML.EVALUATE` function to see the evaluation metrics of all the created models. # # The following four columns (`non_seasonal_`{`p`,`d`,`q`} and `has_drift`) define an ARIMA model. The three metrics after that (`log_likelihood`, `AIC`, and `variance`) are relevant to the ARIMA model fitting process. The fitting process determines the best ARIMA model by using the auto.ARIMA algorithm, one for each time series. # + colab={"base_uri": "https://localhost:8080/", "height": 214} id="NAlyHOOjBxRV" outputId="c40ebf67-a621-46b0-c9db-e14e907b1cd5" # %%bigquery --project $PROJECT_ID SELECT * FROM ML.EVALUATE(MODEL bqmlforecast.arima_model) # + [markdown] id="ZXY99nSmCnsB" # As you can see, there were five models trained, one for each of the products in _item_name_. Each model has its own p,d,q hyperparameters for ARIMA, and the detected seasonality for these five models was _WEEKLY_. # + [markdown] id="0mDSGlrDap-C" # ### Make predictions using the model # + [markdown] id="QZrj49Nwap-D" # Make predictions using `ML.FORECAST` ([syntax documentation](https://cloud.google.com/bigquery-ml/docs/reference/standard-sql/bigqueryml-syntax-forecast)), which forecasts the next n values, as set in `horizon`. You can also change the `confidence_level`, the percentage that the forecasted values fall within the prediction interval. # + id="tTwgjaITYOx8" # %%bigquery dfforecast --project $PROJECT_ID DECLARE HORIZON STRING DEFAULT "30"; #number of values to forecast DECLARE CONFIDENCE_LEVEL STRING DEFAULT "0.90"; EXECUTE IMMEDIATE format(""" SELECT * FROM ML.FORECAST(MODEL bqmlforecast.arima_model, STRUCT(%s AS horizon, %s AS confidence_level) ) """,HORIZON,CONFIDENCE_LEVEL) # + colab={"base_uri": "https://localhost:8080/", "height": 301} id="YaHX0NWXap-G" outputId="5bd2bc73-4362-458d-c191-17b07c2dc912" dfforecast.head() # + [markdown] id="N11NZ1d3ap-I" # Since `horizon` is set to 30, the result is 30 x (number of items), with one row per forecasted value: # + colab={"base_uri": "https://localhost:8080/"} id="J5nKk-XQap-I" outputId="2246f362-c0a7-4bb8-b432-88e066276f7c" print(f"Number of rows: {dfforecast.shape[0]}") # - # #### Inspect the ARIMA model coefficients # + [markdown] id="3QLmtPK19VLE" # You can view the coefficients of each of the ARIMA models using `ML.ARIMA_COEFFICIENTS` ([documentation](https://cloud.google.com/bigquery-ml/docs/reference/standard-sql/bigqueryml-syntax-arima-coefficients)). # # For each of the models, ar_coefficients shows the model coefficients of the autoregressive (AR) part of the ARIMA model. Similarly, ma_coefficients shows the model coefficients of moving-average (MA) part. They are both arrays, whose lengths are equal to non_seasonal_p and non_seasonal_q, respectively. The intercept_or_drift is the constant term in the ARIMA model. 
# + colab={"base_uri": "https://localhost:8080/", "height": 197} id="fzxhgxY6-EHs" outputId="389a2380-c920-447a-f1bd-447b0592a36e" # %%bigquery --project $PROJECT_ID SELECT * FROM ML.ARIMA_COEFFICIENTS(MODEL bqmlforecast.arima_model) # + [markdown] id="MPdfnofgap-K" # #### Plot the forecasted predictions # + [markdown] id="zK9AeZnFap-L" # Plot the forecasted predictions: # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="PBKzdLi-ap-L" outputId="08b2165f-7f12-4144-87aa-c4c0363a020f" itemslist = list(dfhistorical.item_name.unique()) for item in itemslist: datah = dfhistorical[dfhistorical.item_name==item] dataf = dfforecast[dfforecast.item_name==item] plot_historical_and_forecast(input_timeseries = datah, timestamp_col_name = "date", data_col_name = "total_amount_sold", forecast_output = dataf, actual = None, title = item, plotstartdate = "2017-01-01") # + [markdown] id="_UEsngQXap-N" # #### Plot the forecasted predictions against the actual data # + [markdown] id="jNPEn3ROap-O" # To visualize the data in Python, first save the data to a Pandas dataframe. # + id="t-K52as0ap-O" # %%bigquery dfactual --params $ARIMA_PARAMS --project $PROJECT_ID DECLARE HORIZON STRING DEFAULT "30"; #number of values to forecast SELECT date, item_description AS item_name, SUM(bottles_sold) AS total_amount_sold FROM `bigquery-public-data.iowa_liquor_sales.sales` GROUP BY date, item_name HAVING date BETWEEN DATE_ADD(@TRAININGDATA_ENDDATE, INTERVAL 1 DAY) AND DATE_ADD(@TRAININGDATA_ENDDATE, INTERVAL 1+CAST(HORIZON AS INT64) DAY) ORDER BY date; # + colab={"base_uri": "https://localhost:8080/", "height": 197} id="8dZ8pRewap-Q" outputId="79967295-00f2-431d-e482-feaf517441c3" dfactual.head() # + [markdown] id="W8gcqtwiap-S" # Plot the forecasted predictions against the actual values: # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="5C9tXEvwap-S" outputId="bec48383-c501-4a12-ed34-fd9b12093bd8" itemslist = list(dfhistorical.item_name.unique()) for item in itemslist: datah = dfhistorical[dfhistorical.item_name==item].sort_values('date') dataf = dfforecast[dfforecast.item_name==item].sort_values(['forecast_timestamp']) dataa = dfactual[dfactual.item_name==item].sort_values('date') plot_historical_and_forecast(input_timeseries = datah, timestamp_col_name = "date", data_col_name = "total_amount_sold", forecast_output = dataf, actual = dataa, title = item, plotstartdate = "2017-01-01") # + [markdown] id="M_rLvjgIap-U" # ## What to do with the forecasted results # + [markdown] id="9b-KFF8wap-U" # ### Create a dashboard with Data Studio # + [markdown] id="XuRcoFQdXbrb" # Follow the steps below to create an interactive, shareable dashboard of the forecasted data using Data Studio. # + [markdown] id="YuiXk7b5ap-V" # (1) Create a view that concatenates the historical time series and the forecasted time series as shown in the following example query. 
# + colab={"base_uri": "https://localhost:8080/", "height": 347} id="2C8Wbu--kvf0" outputId="92ed97b5-b5ce-4f61-a1e6-8eed6b78cf34" # %%bigquery --params $ARIMA_PARAMS --project $PROJECT_ID CREATE OR REPLACE VIEW bqmlforecast.outputdata_datastudio AS ( SELECT date AS timestamp, item_name, total_amount_sold AS history_value, NULL AS forecast_value, NULL AS prediction_interval_lower_bound, NULL AS prediction_interval_upper_bound FROM bqmlforecast.training_data UNION ALL SELECT EXTRACT(DATE FROM forecast_timestamp) AS timestamp, item_name, NULL AS history_value, forecast_value, prediction_interval_lower_bound, prediction_interval_upper_bound FROM ML.FORECAST(MODEL bqmlforecast.arima_model, STRUCT(30 AS horizon, 0.9 AS confidence_level)) ORDER BY timestamp ) # - # The SQL before the `UNION ALL` clause forms the history time series. The SQL after the `UNION ALL` clause uses `ML.FORECAST` to generate the forecasted time series as well as the prediction interval. This example uses different fields for `history_value` and `forecasted_value` to plot them in different colors. # (2) Using the [BigQuery UI](https://console.cloud.google.com/bigquery), navigate to your view (`bqmlforecast.outputdata_datastudio`) and click **Export** to **Explore with Data Studio**. A new tab opens in the browser. # <img src="images/bq_export_datastudio.png" align="left"> # (3) In the **Chart** panel, find the **Time series chart** icon and click it, as shown in the following screenshot. # <img src="images/datastudio_charts.png" align="left"> # (4) Under the **Chart** panel, in the **Data** panel, find the **Metric** section. Add the following metrics: `history_value`, `forecast_value`, `prediction_interval_lower_bound`, and `prediction_interval_upper_bound`. Then, remove the default metric **Record Count** as shown in the following screenshot. # <img src="images/datastudio_chartsettings.png" align="left"> # (5) In the **Style** panel, scroll down to the **Missing Data** option and use **Linear Interpolation** (or **Line Breaks**) instead of **Line to Zero**. # <img src="images/datastudio_missingdata.png" align="left"> # (6) In the **Filter** panel, add `item_name`, and select a single liquor product (e.g., **Five O'clock Vodka**) to inspect the time series data for just that product. # <img src="images/datastudio_filter_item.png" align="left"> # (7) After you complete these steps, the following plot appears in the left panel. The input history time series is in blue, while the forecasted series is in green. The prediction interval is the region between the lower bound series and the upper bound series. # <img src="images/datastudio_fiveoclockvodka.png" align="left"> r to now grab specific parts of the datetime object. Lets grab just the year from the all of the cells in the `issue_d` column # + id="y4479qzj-Yj5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 221} outputId="70e891ea-8aed-4844-bb6e-0183d3d32af4" df['issue_d'].dt.year # + [markdown] id="coNYQvdJ-b0s" colab_type="text" # Now the month. # + id="D28keYR3-dJn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 221} outputId="1a63846f-fc11-4660-a279-55dcebf8faeb" df['issue_d'].dt.month # + [markdown] id="g4_O0q0t-hOX" colab_type="text" # It's just that easy! Now, instead of printing them out, lets add these year and month values as new columns on our dataframe. Again, you'll have to scroll all the way over to the right in the table to see the new columns. 
# + id="VJdlhqH1-mri" colab_type="code" colab={} df['issue_year'] = df['issue_d'].dt.year df['issue_month'] = df['issue_d'].dt.month # + [markdown] id="LAxig0nE-1kt" colab_type="text" # Because all of these dates come from Q4 of 2018, the `issue_d` column isn't all that interesting. Lets look at the `earliest_cr_line` column, which is also a string, but that could be converted to datetime format. # # We're going to create a new column called `days_from_earliest_credit_to_issue` # # It's a long column header, but think about how valuable this piece of information could be. This number will essentially indicate the length of a person's credit history and if that is correlated with repayment or other factors could be a valuable predictor! # + id="5S6BDZjp-e8-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 372} outputId="e85d54f3-e5b8-4846-bb58-2a045b4c8aea" df.head() # + id="X0VwXV1F_YpZ" colab_type="code" colab={} df['earliest_cr_line'] = pd.to_datetime(df['earliest_cr_line'], infer_datetime_format=True) # + id="kQUSwBIbqv-r" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 221} outputId="376614da-6616-48ed-90ac-2e0df441e410" (df['issue_d'] - df['earliest_cr_line']) # + id="n0SpdwVIrAMX" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 372} outputId="58d1facc-4442-4a6d-deab-0a5da484e636" df['credit_length_days'] = (df['issue_d'] - df['earliest_cr_line']).dt.days df['credit_length_years'] = df['credit_length_days'] / 365 df.head() # + [markdown] id="ml-N769C_e_n" colab_type="text" # What we're about to do is so cool! Pandas' datetime format is so smart that we can simply use the subtraction operator `-` in order to calculate the amount of time between two dates. # # Think about everything that's going on under the hood in order to give us such straightforward syntax! Handling months of different lengths, leap years, etc. Pandas datetime objects are seriously powerful! # + id="FWu1a57V_cHx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 170} outputId="abae35d2-2c84-48b2-8bbf-d8ac09f1bf6f" df['credit_length_years'].describe() # + [markdown] id="FHQEQZx6_3_p" colab_type="text" # What's oldest credit history that was involved in Q4 2018? # + id="nrEqk6hG_20O" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="dc2b4b76-7a06-4b80-c859-b1b4179db174" df['credit_length_days'].max() # + [markdown] id="gORWK62cAF2a" colab_type="text" # 25,171 days is ~ 68.96 years of credit history! # + [markdown] id="THRLJ-7Jn4om" colab_type="text" # ## Challenge # # Pandas' datetime format is so easy to work with that there's really no excuse for not using dates to make features on a dataframe! Get ready to practice more of this on your assignment. # + [markdown] id="01aq2uQra1iM" colab_type="text" # <img align="left" src="https://lever-client-logos.s3.amazonaws.com/864372b1-534c-480e-acd5-9711f850815c-1524247202159.png" width=200> # + [markdown] id="lKfUOajja4VB" colab_type="text" # # Assignment: # # - Replicate the lesson code. # # - This means that if you haven't followed along already, type out the things that we did in class. Forcing your fingers to hit each key will help you internalize the syntax of what we're doing. Make sure you understand each line of code that you're writing, google things that you don't fully understand. 
# - [Lambda Learning Method for DS - By Ryan Herr](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit?usp=sharing) # - Convert the `term` column from string to integer. # - Make a column named `loan_status_is_great`. It should contain the integer 1 if `loan_status` is "Current" or "Fully Paid." Else it should contain the integer 0. # - Make `last_pymnt_d_month` and `last_pymnt_d_year` columns. # + id="WZysyhV2Z4kR" colab_type="code" colab={} ##### Begin Working Here ##### # + id="G1EZpC5iCvzd" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="c61380e3-2335-4433-8024-ca6b60100237" #Convert the term column from string to integer. test = " 36 months" def term_to_integer(term): if isinstance(term, str): return int(term.strip("months").strip()) df['term'] = df['term'].apply(term_to_integer) df['term'].head(10) # + id="wHdpjV77G9CA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 419} outputId="bb8242fd-ce0a-466a-82a6-00dfb383d219" #Make a column named loan_status_is_great. It should contain the integer 1 if loan_status is "Current" or "Fully Paid." Else it should contain the integer 0. def loan_greatness(data): if data == 'Current' or data == 'Fully Paid': return 1 else: return 0 pass df['loan_status_is_great'] = df['loan_status'].apply(loan_greatness) df[['loan_status_is_great', 'loan_status']] # + id="ZUKoMgWcJVOz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 436} outputId="abc1873f-de33-4f9b-8382-41cdb8c93aee" #Make last_pymnt_d_month and last_pymnt_d_year columns. def get_month(date): if isinstance(date, str): return pd.to_datetime(date).month def get_year(date): if isinstance(date, str): return pd.to_datetime(date).year df['last_pymnt_d_month'] = df['last_pymnt_d'].apply(get_month) df['last_pymnt_d_year'] = df['last_pymnt_d'].apply(get_year) df[['last_pymnt_d_month', 'last_pymnt_d_year', 'last_pymnt_d']] # + [markdown] id="6yytMoP2bVW3" colab_type="text" # # Stretch Goals # # You can do more with the LendingClub or Instacart datasets. # # LendingClub options: # - There's one other column in the dataframe with percent signs. Remove them and convert to floats. You'll need to handle missing values. # - Modify the `emp_title` column to replace titles with 'Other' if the title is not in the top 20. # - Take initiatve and work on your own ideas! # # Instacart options: # - Read [Instacart Market Basket Analysis, Winner's Interview: 2nd place, Kazuki Onodera](http://blog.kaggle.com/2017/09/21/instacart-market-basket-analysis-winners-interview-2nd-place-kazuki-onodera/), especially the **Feature Engineering** section. (Can you choose one feature from his bulleted lists, and try to engineer it with pandas code?) # - Read and replicate parts of [Simple Exploration Notebook - Instacart](https://www.kaggle.com/sudalairajkumar/simple-exploration-notebook-instacart). (It's the Python Notebook with the most upvotes for this Kaggle competition.) # - Take initiative and work on your own ideas! 
# + id="32KMZ-jfcV-T" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="205523bf-8c61-4682-d834-5e50d6653a1d" #inspect revol_util for whitespace df['revol_util'][0] # + id="Flk4hX-TebQQ" colab_type="code" colab={} #no whitespace, so lets make the function to convert the data to float def convert_revol_util_to_float(data): if isinstance(data, str): return float(data.strip('%')) # + id="wOHNsAUDe8MQ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="6d8cc660-3591-4e48-93a1-e44dc71114bd" #test the function on some dummy data dummy_data = '73%' test = convert_revol_util_to_float(dummy_data) print(type(test)) test # + id="wa-Kka71fW4h" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 419} outputId="114c3880-870a-49b2-bf27-e29b06b11374" #Apply changes to rest of revol_util column df['revol_util_cleaned'] = df['revol_util'].apply(convert_revol_util_to_float) df[['revol_util_cleaned', 'revol_util']] # + id="DUjs1wewbaXH" colab_type="code" colab={} # # !wget https://s3.amazonaws.com/instacart-datasets/instacart_online_grocery_shopping_2017_05_01.tar.gz # + id="z3Xsjj4Cbbo2" colab_type="code" colab={} # # !tar --gunzip --extract --verbose --file=instacart_online_grocery_shopping_2017_05_01.tar.gz # + id="VwZF9vL2bdtS" colab_type="code" colab={} # # %cd instacart_2017_05_01
35,643
/homework/Day_008_HW.ipynb
c5dda82f23c2e34aee735359cea23d4e8f560598
[]
no_license
rainbow199301/2nd-ML100Days
https://github.com/rainbow199301/2nd-ML100Days
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
35,726
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python [conda env:nairobi_ambulance] * # language: python # name: conda-env-nairobi_ambulance-py # --- import sys sys.path.append('../Scripts') from capstone_functions import * # + from collections import defaultdict import numpy as np from scipy.spatial import Voronoi, voronoi_plot_2d from scipy.spatial.distance import euclidean import matplotlib.pyplot as plt import seaborn as sns import contextily as ctx import imageio import os sns.set(style="white") colorblind_palette = sns.color_palette('colorblind').as_hex() # %matplotlib inline # + # source : https://blog.jsalv.com/pythonin-voronoi-and-k-means/ # functions to run through k_means algorithm and plot the voronoi def classify_points(centers, data): """Classifies points from data according to which center they're closest to. Returns points like {'g': np.array([..., ...], ...), } """ classes = defaultdict(list) for pt in data: dists = [euclidean(pt, center) for center in centers] classes[colorblind_palette[dists.index(min(dists))]].append(pt) # Concat the list of points in each class together to a np array. classes = {cl: np.array(classes[cl]) for cl in classes.keys()} return classes def find_new_medians(data_classified): '''Calculates new centers using median of points in the classified data.''' new_medians = [] for pts in data_classified.values(): new_medians.append(np.median(pts, axis=0)) return np.array(new_medians) def find_new_centers(data_classified): '''Calculates new centers using mean of points in the classified data.''' new_means = [] for pts in data_classified.values(): new_means.append(pts.mean(axis=0)) return np.array(new_means) def plot_voronoi(iteration, centroids, data_classified, save_on=False): fig, ax = plt.subplots(figsize=(15, 15)) """Plot the Voronoi diagram with our data classified.""" ax.set_title(f"Iteration {iteration}", fontsize=20) vor = Voronoi(centroids) _ = voronoi_plot_2d(vor, ax=ax, show_points=False, show_vertices=False, line_width=3) for center in data_classified.keys(): ax.scatter( data_classified[center][:, 0], data_classified[center][:, 1], c=center, s=15, alpha=0.8 ) ax.scatter(centroids[:, 0], centroids[:, 1], c="black", s=90, marker="P") ax.axis('off') ax.set_xlim(x_lims) ax.set_ylim(y_lims) ctx.add_basemap(ax, source=ctx.providers.OpenStreetMap.Mapnik) # save the chart of the iterations if save_on: plt.savefig('../Images/'+f'k_means_iter{iteration:03}.png') def create_gif(images_path='../Images', output_file=f'../Outputs/k_means.gif', fps=1): files = os.listdir(images_path) files.sort() images_path_filename = [os.path.join(images_path,file) for file in files] images = [] for img in images_path_filename: images.append(imageio.imread(img)) imageio.mimsave(output_file, images, fps=fps) # !rm ../Images/*.* print('gif created') # - # Create individual plots of k_means iterations for making a gif def visualise_k_means(method='k_means', remove_outlier=False, n_iter=15, gif=False, output_identifier='viz'): train_df = create_crash_df('../Inputs/Train.csv') if remove_outlier: train_df = outlier_removal(train_df, filter=0.005) # convert coordinate system k = 6378137 train_df['longitude'] = train_df['longitude'].apply(lambda x: x * (k * np.pi/180.0)) train_df['latitude'] = train_df['latitude'].apply(lambda x: np.log(np.tan((90 + x) * np.pi/360.0)) * k) data = train_df[['longitude','latitude']].values # set up initial centroids n_centers = 
6 centroids = np.random.rand(n_centers, 2) * k / 360 centroids = centroids + (train_df.longitude.median(), train_df.latitude.median()) # Make some nice boundaries #lat_min = train_df.latitude.min() #lat_max = train_df.latitude.max() #lon_min = train_df.longitude.min() #lon_max = train_df.longitude.max() #y_lims = ((lat_min-lat_max), (lat_max+lat_max*-0.4)) #x_lims = ((lon_min-lon_min*0.001), (lon_max-lon_min*0.001)) # hard coding optimal y_lims = (-276743.5935774802, -37764.79152683686) x_lims = (4040437.699748081, 4212681.056255888) # run through iterations of k_means for i in range(n_iter): # Assign points to centroids data_classified = classify_points(centers=centroids, data=data) # Plot the Voronoi diagram with our data classified plot_voronoi(i, centroids, data_classified, save_on=gif) # Update the centroids based on points if method == 'k_means': centroids = find_new_centers(data_classified) elif method == 'k_medians': centroids = find_new_medians(data_classified) if gif: create_gif(output_file=f'../Outputs/{method}_{output_identifier}_{dt.datetime.now()}.gif') visualise_k_means(method='k_means', remove_outlier=False, n_iter=15, gif=True, output_identifier='wo_filtering')
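# The same driver also supports the k-medians update and the 0.5% outlier filter; a usage sketch for comparison (argument values are illustrative, not from the original run):

# +
# Optional comparison run: k-medians with outlier filtering enabled
# visualise_k_means(method='k_medians', remove_outlier=True, n_iter=15, gif=True,
#                   output_identifier='with_filtering')
# -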
5,249
/mission_to_mars.ipynb
27ef2b0b18d67011616e960e3c22713459a9af5e
[]
no_license
ConnorHz/MarsMissionWebScraping
https://github.com/ConnorHz/MarsMissionWebScraping
1
0
null
null
null
null
Jupyter Notebook
false
false
.py
4,182
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: 'Python 3.8.8 64-bit (''base'': conda)' # name: python3 # --- # # ***Applying Transfer learning and using the embedded vectors from Doc2Vec to train a Pytorch Feed Forward Neural Network :*** import numpy as np import pandas as pd import re from tqdm import tqdm tqdm.pandas() from sklearn.preprocessing import StandardScaler from sklearn.model_selection import train_test_split import torch from matplotlib import pyplot as plt import seaborn as sns # Now that we got our embedded vectors from all the reviews using doc2vec and saved them in a csv file, we first start by importing the embeddings and then we will split these vectors into train set , validation set and test set and then train different NN and pick up the one that performs the best on the validation set # # ***Getting the arrays of the reviews and the target sentiments :*** data_new=pd.read_csv('./data/doc2vec_dataset.csv',sep='\t') def str_to_float(row): rev=row['embedded review'] rev=re.sub('[\n]', '', rev).strip('[]').split() return(np.array(rev,dtype=str).astype(np.float)) data_new['array']=data_new.progress_apply(str_to_float,axis=1) X=np.array(data_new['array'].to_list(),dtype='float64') Y=data_new['sentiment'].to_numpy() # # ***Splitting the data and Standardizing using X_train :*** X_train,X_val,Y_train,Y_val=train_test_split(X,Y,test_size=0.33,random_state=40) X_test,X_val,Y_test,Y_val=train_test_split(X_val,Y_val,test_size=0.5,random_state=40) std=StandardScaler().fit(X=X_train) std X_train_std,X_val_std,X_test_std=std.transform(X_train),std.transform(X_val),std.transform(X_test) print(f'mean of training data after standardizing : {X_train_std.mean():.5f}') print('-'*50) print(f'mean of testing data after standardizing : {X_test_std.mean():.5f}') # # ***Training our NN on classifying the embeddings of the reviews :*** class Feedforward(torch.nn.Module): def __init__(self, input_size, hidden_size): super(Feedforward, self).__init__() self.input_size = input_size self.hidden_size = hidden_size self.fc1=torch.nn.Linear(self.input_size,self.hidden_size) self.relu=torch.nn.ReLU() self.drop=torch.nn.Dropout(p=0.3) self.fc2 = torch.nn.Linear(self.hidden_size, 2) self.sigmoid = torch.nn.Sigmoid() def forward(self, x): fc=self.fc1(x) relu=self.relu(fc) drop=self.drop(relu) output = self.fc2(drop) output = self.sigmoid(output) return output X_train_std,Y_train=torch.FloatTensor(X_train_std).cuda(),torch.FloatTensor(Y_train).long().cuda() X_val_std,Y_val=torch.FloatTensor(X_val_std).cuda(),torch.FloatTensor(Y_val).long().cuda() X_test_std,Y_test=torch.FloatTensor(X_test_std).cuda(),torch.FloatTensor(Y_test).long().cuda() model = Feedforward(300, 500) criterion = torch.nn.CrossEntropyLoss().cuda() optimizer = torch.optim.SGD(model.parameters(), lr = 0.01) model.cuda() model.eval() Y_pred = model(X_val_std) before_train = criterion(Y_pred, Y_val) print(f'Evaluation loss before training : {before_train.item():.3f}') model.train() epoch = 10000 batch_size=1000 for epoch in tqdm(range(epoch)): loss_list=[] for i in range(0,len(X_train_std),batch_size): X_batch=X_train_std[i:i+batch_size] Y_batch=Y_train[i:i+batch_size] optimizer.zero_grad() # Forward pass y_pred = model(X_batch) # Compute Loss loss=criterion(y_pred, Y_batch) loss_list.append(loss.item()) loss.backward() optimizer.step() if epoch%3000==0: print('-'*50) print('Epoch {}: train loss: 
{:.3f}\n'.format(epoch, np.mean(loss_list))) # Backward pass model.eval() Y_pred = model(X_val_std) after_train = criterion(Y_pred, Y_val) print(f'Evaluation loss after is+=1 train_coord = train_coord.permute([1,0,2]) ground_tru = ground_tru.permute([1,0,2]) in_train = Variable(train_coord) targets = Variable(ground_tru) optimizer.zero_grad() #print(in_train.shape) #print(targets.shape) out = lstm.forward(in_train) out_bis = out[:,:,0:2].clone() #print(out_bis.shape) for i in range(10): if i == 0: out_bis[i, :, 0:2] = in_train[-1, :, 0:2] + out[i, :, 2:]*0.4 else: out_bis[i, :, 0:2] = out[i - 1, :, 0:2] + out[i, :, 2:]*0.4 #pdb.set_trace() loss1 = (criterion(out[:,:,0:2], targets[:,:,0:2])) loss2 = (criterion(out[:,:,2:], targets[:,:,2:])) loss3 = criterion(out_bis, targets[:,:,0:2]) #loss4 = 5*criterion(out[0,:,2:], targets[0,:,2:]) #+ 10*criterion(out_bis[-1,:,:], targets[-1,:,0:2]) #+ 10*criterion(out_bis[0,:,:], targets[0,:,0:2])) loss1.backward(retain_graph=True) loss2.backward(retain_graph=True) loss3.backward(retain_graph=True) #loss4.backward() optimizer.step() running_loss += (loss1+loss2+loss3).item() total_train_loss += (loss1+loss2+loss3).item() if steps % print_every == 0: stop = time.time() val_loss=0 for ii, (valcoord, valgt) in enumerate(valloader): valcoord = valcoord.permute([1,0,2]) valgt = valgt.permute([1,0,2]) inputs = Variable(valcoord, volatile=True) predicted = lstm.predict(inputs) #print(predicted.shape) predicted_bis = predicted[:,:,0:2].clone() #print(out_bis.shape) for i in range(10): if i == 0: predicted_bis[i, :, 0:2] = inputs[-1, :, 0:2] + predicted[i, :, 2:]*0.4 else: predicted_bis[i, :, 0:2] = predicted[i - 1, :, 0:2] + predicted[i, :, 2:]*0.4 val_loss+= (criterion(predicted[:,:,0:2],valgt[:,:,0:2]).item() + criterion(predicted[:,:,2:],valgt[:,:,2:]).item() + criterion(predicted_bis, valgt[:,:,0:2]).item()) print("Epoch: {}/{}..".format(e+1, epochs), "Validation loss: {:.4f}..".format(val_loss/ii), "Training loss: {:.4f}..".format(running_loss/print_every), "{:.4f} s/batch".format((stop - start)/print_every) ) loss_val.append(val_loss/ii) running_loss = 0 start = time.time() loss_train.append(total_train_loss/steps_bis) # + epoch = np.arange(1,epochs+1) plt.figure(figsize=(12, 7)) plt.rc('font', family='serif') plt.rc('font', size=20) plt.plot(epoch,loss_train,label='Training loss') plt.plot(epoch,loss_val,c='k',label='Validation loss') plt.xlabel('epoch') plt.ylabel('MSE error') plt.legend() # - output_test_1 = lstm.predict(inputs_test_1) output_test_1 = output_test_1.data.numpy() output_test_2 = lstm.predict(inputs_test_2) output_test_2 = output_test_2.data.numpy() output_test_3 = lstm.predict(inputs_test_3) output_test_3 = output_test_3.data.numpy() # ## Post processing step # Go back to coordinate: # We have Vx and Vy and we want x and y. # $ V = d/t$ # $ d = V*t$ # Here t = 0.4s between each point. # Start from data at index 9. Then we add d_x and d_y to the last x and y. 
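# A vectorised sketch of this integration step (not in the original notebook): positions after k predicted steps equal the last observed position plus 0.4 s times the cumulative sum of the first k predicted velocities. `pred_vel` (shape (10, N, 2)) and `last_obs_pos` (shape (N, 2)) are placeholder names.

# +
# pred_positions = last_obs_pos[None, :, :] + 0.4 * np.cumsum(pred_vel, axis=0)
# -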
# ##### Type 1 trajectory output_coord_1=np.zeros([11,inputs_test_1.shape[1],2]) for j in range(11): for i in range(inputs_test_1.shape[1]): if j==0: output_coord_1[j,i,0:2] = in_test_coord_1[9,i,0:2] else: output_coord_1[j,i,0:2] = output_coord_1[j-1,i,0:2]+output_test_1[j-1,i,2:]*0.4 # + dist = np.zeros(inputs_test_1.shape[1]) for i in range(inputs_test_1.shape[1]): dist[i] = np.sqrt(sum((gt_test_coord_1[10,i,:]-output_coord_1[10,i,:])**2)) final_coord_error = np.mean(dist) print('The final distance between the ground truth and the predicted coordinates is :',final_coord_error.round(3)) # + avr = np.zeros((inputs_test_1.shape[1],11)) for i in range(inputs_test_1.shape[1]): for j in range(11): avr[i,j] = np.sqrt(sum((gt_test_coord_1[j,i,:]-output_coord_1[j,i,:])**2)) average = np.mean(np.mean(avr,1)) print('The average error between the ground truth and the predicted coordinates is :',average.round(3)) # + plt.figure(figsize=(12, 7)) plt.rc('font', family='serif') plt.rc('font', size=20) ind = np.random.randint(inputs_test_1.shape[1]) plt.plot(in_test_coord_1[:,ind,0],in_test_coord_1[:,ind,1],c='b') plt.plot(gt_test_coord_1[:,ind,0],gt_test_coord_1[:,ind,1],c='k') plt.plot(output_coord_1[:,ind,0],output_coord_1[:,ind,1],c='r') plt.axis('equal') # - # ##### Type 2 trajectory output_coord_2=np.zeros([11,inputs_test_2.shape[1],2]) for j in range(11): for i in range(inputs_test_2.shape[1]): if j==0: output_coord_2[j,i,0:2] = in_test_coord_2[9,i,0:2] else: output_coord_2[j,i,0:2] = output_coord_2[j-1,i,0:2]+output_test_2[j-1,i,2:]*0.4 # + dist = np.zeros(inputs_test_2.shape[1]) for i in range(inputs_test_2.shape[1]): dist[i] = np.sqrt(sum((gt_test_coord_2[10,i,:]-output_coord_2[10,i,:])**2)) final_coord_error = np.mean(dist) print('The final distance between the ground truth and the predicted coordinates is :',final_coord_error.round(3)) # + avr = np.zeros((inputs_test_2.shape[1],11)) for i in range(inputs_test_2.shape[1]): for j in range(11): avr[i,j] = np.sqrt(sum((gt_test_coord_2[j,i,:]-output_coord_2[j,i,:])**2)) average = np.mean(np.mean(avr,1)) print('The average error between the ground truth and the predicted coordinates is :',average.round(3)) # + plt.figure(figsize=(12, 7)) plt.rc('font', family='serif') plt.rc('font', size=20) ind = np.random.randint(inputs_test_2.shape[1]) plt.plot(in_test_coord_2[:,ind,0],in_test_coord_2[:,ind,1],c='b') plt.plot(gt_test_coord_2[:,ind,0],gt_test_coord_2[:,ind,1],c='k') plt.plot(output_coord_2[:,ind,0],output_coord_2[:,ind,1],c='r') plt.axis('equal') # - # ##### Type 3 trajectory output_coord_3=np.zeros([11,inputs_test_3.shape[1],2]) for j in range(11): for i in range(inputs_test_3.shape[1]): if j==0: output_coord_3[j,i,0:2] = in_test_coord_3[9,i,0:2] else: output_coord_3[j,i,0:2] = output_coord_3[j-1,i,0:2]+output_test_3[j-1,i,2:]*0.4 # + dist = np.zeros(inputs_test_3.shape[1]) for i in range(inputs_test_3.shape[1]): dist[i] = np.sqrt(sum((gt_test_coord_3[10,i,:]-output_coord_3[10,i,:])**2)) final_coord_error = np.mean(dist) print('The final distance between the ground truth and the predicted coordinates is :',final_coord_error.round(3)) # + avr = np.zeros((inputs_test_3.shape[1],11)) for i in range(inputs_test_3.shape[1]): for j in range(11): avr[i,j] = np.sqrt(sum((gt_test_coord_3[j,i,:]-output_coord_3[j,i,:])**2)) average = np.mean(np.mean(avr,1)) print('The average error between the ground truth and the predicted coordinates is :',average.round(3)) # + plt.figure(figsize=(12, 7)) plt.rc('font', family='serif') plt.rc('font', size=20) ind
= np.random.randint(inputs_test_3.shape[1]) plt.plot(in_test_coord_3[:,ind,0],in_test_coord_3[:,ind,1],c='b') plt.plot(gt_test_coord_3[:,ind,0],gt_test_coord_3[:,ind,1],c='k') plt.plot(output_coord_3[:,ind,0],output_coord_3[:,ind,1],c='r') plt.axis('equal') # - inputs_validation.shape
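# The per-type error blocks above repeat the same computation; a compact helper (a sketch, not part of the original notebook) that returns both the average and the final displacement error for arrays shaped (time_steps, n_trajectories, 2):

# +
import numpy as np

def displacement_errors(pred_xy, gt_xy):
    """Return (average, final) Euclidean displacement error for (T, N, 2) arrays."""
    dists = np.sqrt(((pred_xy - gt_xy) ** 2).sum(axis=-1))  # shape (T, N)
    return dists.mean(), dists[-1].mean()

# Hypothetical call mirroring the type-1 arrays above:
# ade_1, fde_1 = displacement_errors(output_coord_1, gt_test_coord_1)
# -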
11,876
/Solutions/E04.M05/Solution-E04.M05.ipynb
3cc3baf9365feda83d447c8cdea5ac5aef8d4051
[]
no_license
SbasGM/Python-Exercises
https://github.com/SbasGM/Python-Exercises
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
4,695
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Solution: Exercise 4.M05 # Ask the user for a time duration in seconds and # convert it to an integer straight away tSeconds = int(input("Please enter a time duration in seconds: ")) # Backup the initial number of seconds tSecondsInitial = tSeconds # #### Calculate how many full days are contained in tSeconds # + # Calculate the number of seconds in a day secondsInDay = 24 * 60 * 60 # Calculate how many full days are in tSeconds tDays = tSeconds // secondsInDay # Calculate the remaining seconds after subtracting the full days tSeconds = tSeconds % secondsInDay # - # #### Calculate how many full hours are contained in the remaing tSeconds # + # Calculate the number of seconds in an hours secondsInHour = 60 * 60 # Calculate how many full days are in tSeconds tHours = tSeconds // secondsInHour # Calculate the remaining seconds after subtracting the full hours tSeconds = tSeconds % secondsInHour # - # #### Calculate how many full minutes are contained in the remaing tSeconds # + # Calculate the number of seconds in an hours secondsInMinute = 60 # Calculate how many full days are in tSeconds tMinutes = tSeconds // secondsInMinute # Calculate the remaining seconds after subtracting the full minutes tSeconds = tSeconds % secondsInMinute # - # #### Display the result print(str(tSecondsInitial) + " seconds correspond to " + str(tDays) + " days, " + str(tHours) + " hours, " + str(tMinutes) + " minutes, and " + str(tSeconds) + " seconds.") # ### Bonus: # Above print command looks very messy which tends to happen once you try and print multiple values and especially if some of them need to be converted as well. Below the same output is prioduced using the format function (see exercise 3.H02): print("{0} seconds correspond to {1} days, {2} hours, {3} minutes, and {4} seconds." .format(tSecondsInitial, tDays, tHours, tMinutes, tSeconds)) lf.fc3 = nn.Linear(z_dim, h_dim) self.decoder = nn.Sequential( UnFlatten(), nn.ConvTranspose2d(256, 128, kernel_size=4, stride=2, padding=1), nn.ReLU(), nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(), nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(), nn.ConvTranspose2d(32, image_channels, kernel_size=4, stride=2, padding=1), nn.Sigmoid(), ) def reparameterize(self, mu, logvar): std = torch.exp(0.5 * logvar) eps = torch.randn_like(std) return mu + std * eps def bottleneck(self, h): mu, logvar = self.fc1(h), self.fc2(h) z = self.reparameterize(mu, logvar) return z, mu, logvar def forward(self, x): h = self.encoder(x) z, mu, logvar = self.bottleneck(h) z = self.fc3(z) return self.decoder(z), mu, logvar # - # Reconstruction + KL divergence losses summed over all elements and batch def loss_function(recon_x, x, mu, logvar): cross_entropy = F.binary_cross_entropy( recon_x, x.view(-1, IMG_WIDTH * IMG_HEIGHT), reduction='sum') # see Appendix B from VAE paper: # Kingma and Welling. Auto-Encoding Variational Bayes. 
ICLR, 2014 # https://arxiv.org/abs/1312.6114 # 0.5 * sum(1 + log(sigma^2) - mu^2 - sigma^2) kl_distance = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) return cross_entropy + kl_distance def generate_random(model, epoch): from torch.distributions.normal import Normal images = [] for x in range(10+1): for y in range(10+1): z = torch.randn(NUM_HIDDEN).unsqueeze(0) recon = model.decoder(model.fc3(z)) images.append(recon) images_joined = torch.cat(images).view(-1, 1, IMG_WIDTH, IMG_HEIGHT) save_image(images_joined.cpu(), 'result/vae/epoch{:03d}.png'.format(epoch), nrow=11) def train_vae(): seed = 1 batch_size = 32 epochs = 100 log_interval = 10 torch.manual_seed(seed) transform = transforms.Compose([transforms.Grayscale(), transforms.ToTensor()]) chars = datasets.folder.ImageFolder('data/', transform=transform) train_loader = torch.utils.data.DataLoader(chars, batch_size=batch_size, shuffle=True) model = VAE(image_channels=1, h_dim=IMG_HEIGHT * IMG_WIDTH, z_dim=NUM_HIDDEN) optimizer = optim.Adam(model.parameters(), lr=0.001) for epoch in range(1, epochs + 1): model.train() train_loss = 0 for batch_idx, (data, _) in enumerate(train_loader): optimizer.zero_grad() recon_batch, mu, logvar = model(data) loss = loss_function(recon_batch, data, mu, logvar) loss.backward() train_loss += loss.item() optimizer.step() if batch_idx % log_interval == 0: print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format( epoch, batch_idx * len(data), len(train_loader.dataset), 100. * batch_idx / len(train_loader), loss.item() / len(data))) print('====> Epoch: {} Average loss: {:.4f}'.format( epoch, train_loss / len(train_loader.dataset))) model.eval() generate_random(model, epoch) torch.save(model.state_dict(), 'result/vae/model.pt') train_vae() Image.open('result/vae/epoch080.png')
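# Since the decoder maps any latent code to an image, a short latent-space interpolation sketch can be added at the end (not part of the original notebook; it assumes the constants and imports used above, i.e. `NUM_HIDDEN`, `IMG_WIDTH`, `IMG_HEIGHT` and `save_image`, are defined earlier in the notebook):

# +
# Reload the weights saved by train_vae(), then decode evenly spaced points on the
# line between two random latent codes (file path and sizes taken from above).
model = VAE(image_channels=1, h_dim=IMG_HEIGHT * IMG_WIDTH, z_dim=NUM_HIDDEN)
model.load_state_dict(torch.load('result/vae/model.pt'))
model.eval()

z_start, z_end = torch.randn(NUM_HIDDEN), torch.randn(NUM_HIDDEN)
steps = torch.linspace(0, 1, 8)
z_path = torch.stack([(1 - t) * z_start + t * z_end for t in steps])
with torch.no_grad():
    decoded = model.decoder(model.fc3(z_path)).view(-1, 1, IMG_WIDTH, IMG_HEIGHT)
save_image(decoded.cpu(), 'result/vae/interpolation.png', nrow=8)
# -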
5,694
/hw5_files/Homework5.ipynb
51ba80b83a3c6ed90f8363ded1f44b93d5495e4d
[]
no_license
mjoneil21/Homework-5
https://github.com/mjoneil21/Homework-5
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
3,178
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import re def process_youtube(USvideos , USvideos_Output): daily_counts = dict re_filter = re.compile(r'(local|remote) - - \,28,/',re.I) num_lines_matched = 0 num_lines_not_matched = 0 outputlines = list with open(USvideos) as f: for line in f: line = str.rstrip(USvideos) mfilter = re.match(rgx,line) if mfilter: id = m.group(1) daily_counts[id] = daily_counts.get(id, 0) + 1 trending_date = ??? # Increment count by trending date daily_counts[???] = ??? + 1 num_lines_matched >> outputlines else: # Increment number of lines not matched counter daily_counts['NO_ID'] = daily_counts.get('NO_ID') + 1 for item in outputlines: print>>USvideos_Output, item # + else: # Increment number of lines not matched counter monthly_counts['NO_MONTH'] = monthly_counts.get('NO_MONTH') + 1 # Write the output file # There are a few ways to write out a list of strings to a file. Figure # out one way to do it and then do it. # # Print out the counts dictionary for ??? in daily_counts.???: print (???) # All done, print the results print("\nNum lines matched --> {}".???) print(sum(num_lines_matched.values())) if __name__ == '__main__': process_youtube('./data/USvideos.csv', 'USvideos_SciTech.csv') rrow \boldsymbol{s}_{t-1} + \boldsymbol{g}_t \odot \boldsymbol{g}_t,$$ # # 其中$\odot$是按元素相乘。接着,我们将目标函数自变量中每个元素的学习率通过按元素运算重新调整一下: # # $$\boldsymbol{x}_t \leftarrow \boldsymbol{x}_{t-1} - \frac{\eta}{\sqrt{\boldsymbol{s}_t + \epsilon}} \odot \boldsymbol{g}_t,$$ # # 其中$\eta$是学习率,$\epsilon$是为了维持数值稳定性而添加的常数,如$10^{-6}$。这里开方、除法和乘法的运算都是按元素运算的。这些按元素运算使得目标函数自变量中每个元素都分别拥有自己的学习率。 # # ## 特点 # # 需要强调的是,小批量随机梯度按元素平方的累加变量$\boldsymbol{s}_t$出现在学习率的分母项中。因此,如果目标函数有关自变量中某个元素的偏导数一直都较大,那么该元素的学习率将下降较快;反之,如果目标函数有关自变量中某个元素的偏导数一直都较小,那么该元素的学习率将下降较慢。然而,由于$\boldsymbol{s}_t$一直在累加按元素平方的梯度,自变量中每个元素的学习率在迭代过程中一直在降低(或不变)。所以,当学习率在迭代早期降得较快且当前解依然不佳时,AdaGrad算法在迭代后期由于学习率过小,可能较难找到一个有用的解。 # # 下面我们仍然以目标函数$f(\boldsymbol{x})=0.1x_1^2+2x_2^2$为例观察AdaGrad算法对自变量的迭代轨迹。我们实现AdaGrad算法并使用和上一节实验中相同的学习率0.4。可以看到,自变量的迭代轨迹较平滑。但由于$\boldsymbol{s}_t$的累加效果使学习率不断衰减,自变量在迭代后期的移动幅度较小。 # + attributes={"classes": [], "id": "", "n": "2"} # %matplotlib inline import d2lzh as d2l import math from mxnet import nd def adagrad_2d(x1, x2, s1, s2): g1, g2, eps = 0.2 * x1, 4 * x2, 1e-6 # 前两项为自变量梯度 s1 += g1 ** 2 s2 += g2 ** 2 x1 -= eta / math.sqrt(s1 + eps) * g1 x2 -= eta / math.sqrt(s2 + eps) * g2 return x1, x2, s1, s2 def f_2d(x1, x2): return 0.1 * x1 ** 2 + 2 * x2 ** 2 eta = 0.4 d2l.show_trace_2d(f_2d, d2l.train_2d(adagrad_2d)) # - # 下面将学习率增大到2。可以看到自变量更为迅速地逼近了最优解。 # + attributes={"classes": [], "id": "", "n": "3"} eta = 2 d2l.show_trace_2d(f_2d, d2l.train_2d(adagrad_2d)) # - # ## 从零开始实现 # # 同动量法一样,AdaGrad算法需要对每个自变量维护同它一样形状的状态变量。我们根据AdaGrad算法中的公式实现该算法。 # + attributes={"classes": [], "id": "", "n": "4"} features, labels = d2l.get_data_ch7() def init_adagrad_states(): s_w = nd.zeros((features.shape[1], 1)) s_b = nd.zeros(1) return (s_w, s_b) def adagrad(params, states, hyperparams): eps = 1e-6 for p, s in zip(params, states): s[:] += p.grad.square() p[:] -= hyperparams['lr'] * p.grad / (s + eps).sqrt() # - # 与[“小批量随机梯度下降”](minibatch-sgd.ipynb)一节中的实验相比,这里使用更大的学习率来训练模型。 # + attributes={"classes": [], "id": "", "n": "5"} d2l.train_ch7(adagrad, init_adagrad_states(), {'lr': 0.1}, features, labels) # - # ## 简洁实现 # # 通过名称为“adagrad”的`Trainer`实例,我们便可使用Gluon提供的AdaGrad算法来训练模型。 # + 
attributes={"classes": [], "id": "", "n": "6"} d2l.train_gluon_ch7('adagrad', {'learning_rate': 0.1}, features, labels) # - # ## 小结 # # * AdaGrad算法在迭代过程中不断调整学习率,并让目标函数自变量中每个元素都分别拥有自己的学习率。 # * 使用AdaGrad算法时,自变量中每个元素的学习率在迭代过程中一直在降低(或不变)。 # # ## 练习 # # * 在介绍AdaGrad算法的特点时,我们提到了它可能存在的问题。你能想到什么办法来解决这个问题? # * 在实验中尝试使用其他的初始学习率,结果有什么变化? # # ## 参考文献 # # [1] Duchi, J., Hazan, E., & Singer, Y. (2011). Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul), 2121-2159.
4,547
/notebooks/iurrutia/exercise-creating-reading-and-writing.ipynb
f6efa9ad5b7e83a7b6f9f3ad8c611c09d9e027d0
[]
no_license
Sayem-Mohammad-Imtiaz/kaggle-notebooks
https://github.com/Sayem-Mohammad-Imtiaz/kaggle-notebooks
5
6
null
null
null
null
Jupyter Notebook
false
false
.py
4,886
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # What: # You are wanting to remove the code input cells and input/output prompts when exporting or converting your notebook as a HTML report. This is such a common need based on so many StackOverflow questions about this. # # How: # There are a few different ways. But since nbconvert version 5.3 per the release [announcement](https://groups.google.com/forum/#!msg/jupyter/W2M_nLbboj4/CRGHUdejDwAJ), we can use nbconvert command to do this for us either via command line interface (CLI) or with config file explained below. # # CLI Options # If wanting to affect certain cells instead of ALL cells using cell metadata tags, then this is the CLI example: # # ```jupyter nbconvert Untitled.ipynb --TagRemovePreprocessor.remove_input_tags="{'hide_input'}" --no-prompt``` # # # Cells with metadata tag of "hide_input" will have their input cell removed. You can add metadata tags within your Jupyter notebook. In your jupyter notebook, go to “View” --> “Cell Toolbar” --> “Tags” and then add “hide_input” tag (can be an arbitrary string, so doesn’t have to be “hide_input”). Then save the notebook. Then execute the CLI command above. # ### There is a little "gotcha" on handling quotes as it is apparently OS-dependent. # #### Example usage for removing input cells and remove all input/output prompts having to escape the double quotes (OS agnostic method): # # ```jupyter nbconvert Untitled.ipynb --TagRemovePreprocessor.remove_input_tags={\"hide_input\"} --no-prompt``` # # # # #### Example usage for removing input cells and remove all input/output prompts without having to escape the double quotes (Windows): # # ```jupyter nbconvert Untitled.ipynb --TagRemovePreprocessor.remove_input_tags="{'hide_input'}" --no-prompt``` # # # # #### Example usage for removing input cells and remove all input/output prompts without having to escape the double quotes (‘Nix): # # ```jupyter nbconvert Untitled.ipynb --TagRemovePreprocessor.remove_input_tags=’{“hide_input”}’ --no-prompt``` # # So basically, between Windows and 'Nix OS, you have to swap the single and double quotes. # # Using --config option to use a config file # Alternatively, save options in config file and execute command like so: # # ```jupyter nbconvert Untitled.ipynb --config configA.py``` # # # # where configA.py can contain something like: # # c.TemplateExporter.exclude_input=True # # c.TemplateExporter.exclude_output_prompt=True # # # # # # Reference documentation for configuration options: # # https://nbconvert.readthedocs.io/en/latest/config_options.html on to learn about **[indexing, selecting and assigning](https://www.kaggle.com/residentmario/indexing-selecting-assigning)**. # --- # **[Pandas Home Page](https://www.kaggle.com/learn/pandas)** # # # # # # *Have questions or comments? Visit the [Learn Discussion forum](https://www.kaggle.com/learn-forum) to chat with other Learners.*
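# ### Config file example for tag-based removal
#
# A rough config-file equivalent of the tag-based CLI call shown earlier (treat the exact option names as something to verify against the nbconvert configuration docs linked above):
#
# c.TagRemovePreprocessor.remove_input_tags = {"hide_input"}
#
# c.TemplateExporter.exclude_input_prompt = True
#
# c.TemplateExporter.exclude_output_prompt = True
#
# and run it with ```jupyter nbconvert Untitled.ipynb --config configA.py``` as above.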
3,164
/notebooks/pcayyyyyy_lmao_me388.ipynb
23cb331b4807405f615ed0a660c2bb05cec62891
[]
no_license
turingcompliant/sideprojects-hackcambridge2017
https://github.com/turingcompliant/sideprojects-hackcambridge2017
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
225,519
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Logistic Regression Example # ### 导包 # + import numpy as np import matplotlib.pyplot as plt # %matplotlib inline import tensorflow as tf # - from tensorflow.examples.tutorials.mnist import input_data # + # mnist 是机构缩写,研究算法 # 数据 标准化,测试算法 # mnist 手写数字 # - mnist = input_data.read_data_sets('./') mnist images = mnist.train.images images.shape mnist.train.labels plt.imshow(images[-3].reshape(28,28,)) mnist = input_data.read_data_sets('./',one_hot= True) mnist.train.images.shape mnist.train.labels.shape # one -hot 热,独热编码 # 概率 mnist.train.labels[:10] a = np.array([-3,1,3.0]) a # + b = tf.nn.softmax(a) b # softmax将数据转化成了概率,同时概率之和是1 # 支持向量机,专一 # softmax,所有都不放过,计算概率 with tf.Session() as sess: print(sess.run(b)) # - 2**a np.e**a/(np.e**a).sum() # ### 声明graph、Softmax、最小化Cross Entroy(交叉熵) # Parameters mnist.train.images.shape mnist.train.labels.shape # + X = tf.placeholder(dtype=tf.float64,shape = [None,784]) y = tf.placeholder(dtype=tf.float64,shape = [None,10]) W = tf.Variable(initial_value=tf.random_normal(shape =[784,10],dtype = tf.float64),dtype=tf.float64) b = tf.Variable(initial_value=tf.random_normal(shape = [10],dtype = tf.float64),dtype=tf.float64) # + # matrix multipy 矩阵乘法 pred = tf.matmul(X,W) + b y_ = tf.nn.softmax(pred) # 概率 每个样本概率和等于1 # 预测,非真实分布 y_ # y真实分布 # - # 交叉熵进行计算 cost = tf.reduce_mean(tf.reduce_sum(tf.multiply(y,tf.log(1/y_)),axis = -1)) cost gd = tf.train.GradientDescentOptimizer(0.01) # 交叉熵最小 optimizer = gd.minimize(cost) # ### 初始化TensorFlow进行运算 # + X_train,y_train = mnist.train.next_batch(550) display(X_train.shape,y_train.shape) # - saver = tf.train.Saver() # + with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for i in range(100): for j in range(100): X_train,y_train = mnist.train.next_batch(550) optimizer_,cost_ = sess.run(fetches = [optimizer,cost],feed_dict = {X:X_train,y:y_train}) print('循环%d次,损失函数值是:%0.2f'%(i,cost_)) # 9,19,29,……99 if (i+1)%10 == 0: saver.save(sess,save_path='./model/estimator',global_step=i) # + with tf.Session() as sess: # sess.run(tf.global_variables_initializer()) saver.restore(sess,'./model/estimator-99') for i in range(100,200): for j in range(100): X_train,y_train = mnist.train.next_batch(550) optimizer_,cost_ = sess.run(fetches = [optimizer,cost],feed_dict = {X:X_train,y:y_train}) print('循环%d次,损失函数值是:%0.2f'%(i,cost_)) # 9,19,29,……99 if (i+1)%10 == 0: saver.save(sess,save_path='./model/estimator',global_step=i) # + with tf.Session() as sess: # sess.run(tf.global_variables_initializer()) saver.restore(sess,'./model/estimator-199') for i in range(100,200): for j in range(100): X_train,y_train = mnist.train.next_batch(550) optimizer_,cost_ = sess.run(fetches = [optimizer,cost],feed_dict = {X:X_train,y:y_train}) print('循环%d次,损失函数值是:%0.2f'%(i,cost_)) # 9,19,29,……99 if (i+1)%10 == 0: saver.save(sess,save_path='./model/estimator',global_step=i) # - # ### 计算准确率 # #### argmax cast reduce_mean # + with tf.Session() as sess: saver.restore(sess,save_path='./model/estimator-199') X_test,y_test = mnist.test.next_batch(2000) # 计算准确率 prob_ = sess.run(fetches = y_,feed_dict= {X:X_test}) # prob_类别 prob_ = np.argmax(prob_,axis = -1) y_test = np.argmax(y_test,axis = -1) print('算法预测准确率: ',(prob_ == y_test).mean()) # + with tf.Session() as sess: saver.restore(sess,save_path='./model/estimator-199') X_test,y_test = 
mnist.test.next_batch(2000) # 计算准确率 prob_ = sess.run(fetches = y_,feed_dict= {X:X_test}) # prob_类别 prob_ = np.argmax(prob_,axis = -1) y_test = np.argmax(y_test,axis = -1) print('算法预测准确率: ',(prob_ == y_test).mean()) ple = None if stride != 1 or self.inplanes != planes * block.expansion: downsample = nn.Sequential( nn.Conv2d(self.inplanes, planes * block.expansion, kernel_size=1, stride=stride, bias=False), nn.BatchNorm2d(planes * block.expansion, momentum=BN_MOMENTUM), ) layers = [] layers.append(block(self.inplanes, planes, stride, downsample)) self.inplanes = planes * block.expansion for i in range(1, blocks): layers.append(block(self.inplanes, planes)) return nn.Sequential(*layers) def _get_deconv_cfg(self, deconv_kernel, index): if deconv_kernel == 4: padding = 1 output_padding = 0 elif deconv_kernel == 3: padding = 1 output_padding = 1 elif deconv_kernel == 2: padding = 0 output_padding = 0 return deconv_kernel, padding, output_padding def _make_deconv_layer(self, num_layers, num_filters, num_kernels): assert num_layers == len(num_filters), \ 'ERROR: num_deconv_layers is different len(num_deconv_filters)' assert num_layers == len(num_kernels), \ 'ERROR: num_deconv_layers is different len(num_deconv_filters)' layers = [] for i in range(num_layers): kernel, padding, output_padding = \ self._get_deconv_cfg(num_kernels[i], i) planes = num_filters[i] layers.append( nn.ConvTranspose2d( in_channels=self.inplanes, out_channels=planes, kernel_size=kernel, stride=2, padding=padding, output_padding=output_padding, bias=False)) layers.append(nn.BatchNorm2d(planes, momentum=BN_MOMENTUM)) layers.append(nn.ReLU(inplace=True)) self.inplanes = planes return nn.Sequential(*layers) def forward(self, input): # ResNet x = self.conv1(input) x = self.bn1(x) x = self.relu(x) x = self.maxpool(x) x = self.layer1(x) x = self.layer2(x) x = self.layer3(x) x = self.layer4(x) print('Layer4 size:', x.size()) lstm_input = self.deconv_layers(x) print('Deconv_L size', lstm_input.size()) batch_size, row, col = lstm_input.size(0), lstm_input.size(2), lstm_input.size(3) mask = torch.ones(batch_size, 17, row, col).cuda() / 2. h = torch.zeros(batch_size, 32, row, col).cuda() c = torch.zeros(batch_size, 32, row, col).cuda() print('x, h, mask size: ', x.size(), h.size(), mask.size()) print() #mask_list = [] for _ in range(ITERATION): print('Beforcat:', lstm_input.size(), mask.size()) x = torch.cat((lstm_input, mask), 1) print('Cat 1:', x.size()) # LSTM x = torch.cat((x, h), 1) print('Cat 2:', x.size()) i = self.conv_i(x) f = self.conv_f(x) g = self.conv_g(x) o = self.conv_o(x) c = f * c + i * g h = o * torch.tanh(c) mask = self.conv_mask(h) print('h size:', h.size()) print('Mask size:', mask.size()) print() #mask_list.append(mask) #mask_list = torch.cat((mask_list), 1) return mask resnet_spec = {18: (BasicBlock, [2, 2, 2, 2]), 34: (BasicBlock, [3, 4, 6, 3]), 50: (Bottleneck, [3, 4, 6, 3]), 101: (Bottleneck, [3, 4, 23, 3]), 152: (Bottleneck, [3, 8, 36, 3])} # + jupyter={"outputs_hidden": false} pycharm={"is_executing": false, "name": "#%%\n"} block_class, layers = resnet_spec[50] model = testModel(block_class, layers).cuda() # + jupyter={"outputs_hidden": false} pycharm={"is_executing": false, "name": "#%%\n"} mask_list = model(input) # print(y.size()) # + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"} a = torch.ones(5, 1, 64, 64) b = torch.zeros(5, 1, 64, 64) c = torch.cat((a, b), 1) print(c[:,0]) # + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"} print(mask_list.size()) # - # ######
8,774
/Modality/CT/RSNA Pulmonary Embolism Detection.ipynb
96e8c1424c7a147f50720aba48021b87a7ac8ebe
[]
no_license
dreamsearch/SNUH-AI-Education-for-Clinicians
https://github.com/dreamsearch/SNUH-AI-Education-for-Clinicians
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
4,167,556
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # RSNA Pulmonary Embolism Detection Challenge Terms of Use and Attribution # # You may access and use these de-identified imaging datasets and annotations (“the data”) for non-commercial purposes only, including academic research and education, as long as you agree to abide by the following provisions: # 1. Not to make any attempt to identify or contact any individual(s) who may be the subjects of the data. # 2. If you share or re-distribute the data in any form, include a citation to the “RSNA-STR Pulmonary Embolism CT (RSPECT) Dataset, Copyright RSNA, 2020” and provide a link to this download site: # https://www.rsna.org/education/ai-resources-and-training/ai-image-challenge/rsna-pe-detection-challenge-2020 # # Pulmonary Embolism # ![PE](sample/PE_figure.jpg) # One of the unique rules of this competition is a special requirement regarding the label hierarchy consistency. We predict nine exam-level and one image-level label, where some of the labels are conflicting and must adhere to a specific hirearchy displayed on the image below. # ## Data fields # - `StudyInstanceUID` - unique ID for each study (exam) in the data. # - `SeriesInstanceUID` - unique ID for each series within the study. # - `SOPInstanceUID` - unique ID for each image within the study (and data). # - `pe_present_on_image` - image-level, notes whether any form of PE is present on the image. # - `negative_exam_for_pe` - exam-level, whether there are any images in the study that have PE present. # - `qa_motion` - "informational", indicates whether radiologists noted an issue with motion in the study. # - `qa_contrast` - "informational", indicates whether radiologists noted an issue with contrast in the study. # - `flow_artifact` - "informational" # - `rv_lv_ratio_gte_1` - "exam-level", indicates whether the RV/LV ratio present in the study is >= 1 # - `rv_lv_ratio_lt_1` - "exam-level", indicates whether the RV/LV ratio present in the study is < 1 # - `leftsided_pe` - "exam-level", indicates that there is PE present on the left side of the images in the study # - `chronic_pe` - "exam-level", indicates that the PE in the study is chronic # - `true_filling_defect_not_pe` - "informational", indicates a defect that is NOT PE # - `rightsided_pe` - "exam-level", indicates that there is PE present on the right side of the images in the study # - `acute_and_chronic_pe` - "exam-level", indicates that the PE present in the study is both acute AND chronic # - `central_pe` - "exam-level", indicates that there is PE present in the center of the images in the study # - `indeterminate` - "exam-level", indicates that while the study is not negative for PE, an ultimate set of exam-level labels could not be created, due to QA issues # ## Label consistency # 1. If there is at least one image per StudyInstanceUID with pe_present_on_image > 0.5, then: # - either rv_lv_ratio_lt_1 or rv_lv_ratio_gte_1 must have p > 0.5; both cannot have p > 0.5. # - at least one of central_pe, rightsided_pe and leftsided_pe must have p > 0.5; multiple having p > 0.5 is allowed. # - acute_and_chronic_pe and chronic_pe: only one of them can have p > 0.5; neither having p > 0.5 is allowed. # 2. 
If there are no images per StudyInstanceUID with pe_present_on_image > 0.5, then: # - either indeterminate or negative_exam_for_pe must have p > 0.5; both cannot have p > 0.5. # - all positive-related labels: rv_lv_ratio_lt_1, rv_lv_ratio_gte_1, central_pe, rightsided_pe, leftsided_pe, acute_and_chronic_pe and chronic_pe must have p < 0.5. # # Opening # - Datatype # - <a href='#label'>Label</a>: CSV file # - <a href='#Image_Folder'>Image Folder</a>: DICOM file # # - Working with dicom files # - <a href='#image'>`Image-wise steps`</a> # - Step1: Control File Meta Information # - Step2-1: Control dicom image # - Step2-2: Transforming to Hounsfield Units # - Step2-3: Image Windowing # - <a href='#slice'>`Slice-wise steps`</a> # - Step1-1: Load CT-scans per patient # - Step1-2: Visualize # - Step2: Slices Windowing # - <a href='#voxel'>`Voxel-wise steps`</a> # - Step1: The voxel size # - Step2: Slice Thickness # - Step3: Resampling the voxel size # # from IPython.display import Image import os import numpy as np import pandas as pd import scipy import seaborn as sns import pydicom # ## Datatype # <a id='label'></a> # ## Label # - `train.csv` contains the three UIDs noted above, and a number of labels. Some are targets which require predictions, and some are informational, which will also be noted below in `Data fields`. # # - `test.csv` contains only the three UIDs. # !pwd # !tree ./input/rsna-str-pulmonary-embolism-detection -L 3 basepath = "./input/rsna-str-pulmonary-embolism-detection/" os.listdir(basepath) train = pd.read_csv(basepath + "train.csv") test = pd.read_csv(basepath + "test.csv") train.shape train.tail() train["folder_path"] = basepath + "train/" + train.StudyInstanceUID + "/" + train.SeriesInstanceUID train.tail() # <a id='Image_Folder'></a> # ## Image Folder # - The location for each image is given by: `<StudyInstanceUID>/<SeriesInstanceUID>/<SOPInstanceUID>.dcm` # - Example # - `<StudyInstanceUID>` : 0a4d7c9fa082 # - `<SeriesInstanceUID>` : 38eddda9207f # - `<SOPInstanceUID>.dcm` : c7a7ad9160f5.dcm # !tree ./input/rsna-str-pulmonary-embolism-detection/train IMG_EXTENSION = ['.dcm', '.DCM'] # + def check_extension(filename): return any(filename.endswith(extension) for extension in ['.dcm', '.DCM']) def load_scans_path(folder_path): """ find 'IMG_EXTENSION' file paths in folder. 
return list """ img_paths = [] assert os.path.isdir(folder_path), '%s is not a valid directory' for root, _, fnames in sorted(os.walk(folder_path)): for fname in fnames: if check_extension(fname): path = os.path.join(root, fname) img_paths.append(path) return img_paths[:] # + #trainpath = "/home/Samples/3/44173799/Thick CT" trainpath = "./input/rsna-str-pulmonary-embolism-detection/train" train_img_paths = load_scans_path(trainpath) # - train_img_paths # ### Working with dicom files example_path = train_img_paths[0] example = pydicom.read_file(example_path, force=True) # reading even if no File Meta Information header is found # + jupyter={"outputs_hidden": true} print(example) # - # <a id='image'></a> # ### `Image-wise steps` # ### Step1: Control File Meta Information print("Slice Thickness: {}".format(example.SliceThickness)) #hexadecimal print("Rescale Intercept: {}".format(example[0x0028, 0x1052].value)) #hexadecimal print("Rescale Slope: {}".format(example[0x0028, 0x1053].value)) #hexadecimal # ### Step2-1: Control dicom image ex_img = example.pixel_array ex_img.shape, type(ex_img), ex_img.dtype np.min(ex_img), np.max(ex_img) import matplotlib.pyplot as plt plt.figure(figsize=(10,10)) plt.imshow(ex_img, cmap='gray') plt.axis("off") plt.title(example_path.split('/')[-1]) plt.show() # ### Step2-2: Transforming to Hounsfield Units # ![HU](sample/HU.png) # reference: https://youtu.be/4pb1f79h7_I #Intensity distribution plt.figure(figsize=(10,5)) sns.distplot(ex_img.flatten()) plt.show() # #### Rescale Intercept / Rescale Slope # : CT images, whose pixel values are measured in Hounsfield units, which can have negative values, are "commonly" stored with an `unsigned integer` # $$ # U = m*SV + b # $$ # # - $U$ : output units # - $m$ : rescale slope(0028|1053) # - $SV$ : stored value # - $b$ : rescale intercept(0028|1052) # reference: https://blog.kitware.com/dicom-rescale-intercept-rescale-slope-and-itk/ # + # Why? 
-> Unsigned Integer로 저장하기 때문(아닐 수도 있음) 영상이 원형인 경우 padding # - def set_outside_scanner_to_air(hu_pixelarrays): """ Pixel Padding Value Attribute(0028,0120) -> air """ hu_pixelarrays[hu_pixelarrays < -1024] = -1024 return hu_pixelarrays # + jupyter={"source_hidden": true} def my_set_outside_scanner_to_air(raw_pixelarrays): """ Pixel Padding Value Attribute(0028,0120) -> air """ padding = raw_pixelarrays.min() if padding > 0: #there is no padding return raw_pixelarrays else: air_val = np.min(raw_pixelarrays[raw_pixelarrays != padding]) raw_pixelarrays[raw_pixelarrays <= raw_pixelarrays.min()] = air_val return raw_pixelarrays # - def transform_to_hu(dicom_info, image): image = set_outside_scanner_to_air(image) intercept = dicom_info.RescaleIntercept slope = dicom_info.RescaleSlope hu_image = image.astype(np.float64) * slope + intercept hu_image = set_outside_scanner_to_air(hu_image.astype(np.int16)) return hu_image hu_ex_img = transform_to_hu(example, ex_img) hu_ex_img.dtype #Intensity distribution plt.figure(figsize=(10,5)) sns.distplot(hu_ex_img.flatten()) plt.show() # ### Step2-3: Image Windowing class CT_Windowing: """ CT image windowing : WL_Window Level, WW_Window Width mode(WL|WW) : 'abdomen'(60|400) , 'bone'(300|1500), 'brain'(40|80), 'chest'(40|400), 'lung'(-700|1500), 'custom'(WL|WW) custom_window : if mode == 'custom', set custom_window(WL, WW) -> list or tuple """ def __init__(self, mode='custom', custom_window=None, norm=False): option = ['abdomen' , 'bone', 'brain', 'chest', 'lung', 'custom'] assert mode in option, "Wrong mode: Enter \'abdomen\' , \'bone\', \'brain\', \'chest\', \'lung\', \'custom\'" self.mode = "window_" + mode if custom_window is not None: self.w_level = custom_window[0] self.w_width = custom_window[1] self.norm = norm def windowing(self): self.w_min = self.w_level - (self.w_width / 2) self.w_max = self.w_level + (self.w_width / 2) window_image = self.img.copy() window_image[window_image < self.w_min] = self.w_min window_image[window_image > self.w_max] = self.w_max if self.norm: window_image = np.uint8(((window_image - self.w_min) / (self.w_max - self.w_min)) * 255.0) return window_image def window_abdomen(self): self.w_level = 60 self.w_width = 400 return self.windowing() def window_bone(self): self.w_level = 300 self.w_width = 1500 return self.windowing() def window_brain(self): self.w_level = 40 self.w_width = 80 return self.windowing() def window_chest(self): self.w_level = 40 self.w_width = 400 return self.windowing() def window_lung(self): #SNUH version self.w_level = -700 self.w_width = 1500 #print('lung') return self.windowing() def window_custom(self): return self.windowing() def __call__(self, hu_img): self.img = hu_img self.opt = getattr(self, self.mode, lambda:'custom') return self.opt() window = CT_Windowing(mode='lung', norm=True) lungw_img = window(hu_ex_img) fig, ax = plt.subplots(1,2,figsize=(20,10)) ax[0].imshow(hu_ex_img, cmap="gray") ax[0].axis("off") ax[1].imshow(lungw_img, vmin=0, vmax=255, cmap='gray') ax[1].axis("off") # <a id='slice'></a> # ### `Slice-wise steps` # ### Step1: Load CT-scans per patient testpath = "./input/rsna-str-pulmonary-embolism-detection/test/0a5dc64cf7b8" def load_ct_scans(patient_folder_path): """ Function of Loading CT-scans per patient patient path to CT slices(HU) input: patient folder path -> list or tuple output: File Meta Information, CT sclices -> tuple """ dcm_paths = load_scans_path(patient_folder_path) slices = [pydicom.read_file(dcm_path, force=True) for dcm_path in dcm_paths] slices.sort(key = lambda 
x: float(x.ImagePositionPatient[2])) images = np.stack([file.pixel_array for file in slices]) images = images.astype(np.int16) #images = set_outside_scanner_to_air(images) intercept = slices[0].RescaleIntercept slope = slices[0].RescaleSlope hu_images = images.astype(np.float64) * slope + intercept return slices[0], hu_images.astype(np.int16) _, test_imgs = load_ct_scans(testpath) test_imgs.shape plt.figure(figsize=(10,10)) plt.imshow(test_imgs[30], cmap='gray') plt.axis("off") plt.show() # ### Step1-2: Visualize def sample_stack(stack, rows=6, cols=6, start_with=10, show_every=5): fig,ax = plt.subplots(rows,cols,figsize=[18,20]) for i in range(rows*cols): ind = start_with + i*show_every ax[int(i/rows),int(i % rows)].set_title(f'slice {ind}') ax[int(i/rows),int(i % rows)].imshow(stack[ind], vmin=-1000, vmax=1000,cmap='gray') ax[int(i/rows),int(i % rows)].axis('off') plt.show() sample_stack(test_imgs) import sys from skimage import measure from mpl_toolkits.mplot3d.art3d import Poly3DCollection def plot_3d(image, threshold=700, color="navy"): # Position the scan upright, # so the head of the patient would be at the top facing the camera p = image.transpose(2,1,0) verts, faces,_,_ = measure.marching_cubes_lewiner(p, threshold) fig = plt.figure(figsize=(10, 10)) ax = fig.add_subplot(111, projection='3d') # Fancy indexing: `verts[faces]` to generate a collection of triangles mesh = Poly3DCollection(verts[faces], alpha=0.2) mesh.set_facecolor(color) ax.add_collection3d(mesh) ax.set_xlim(0, p.shape[0]) ax.set_ylim(0, p.shape[1]) ax.set_zlim(0, p.shape[2]) plt.show() # + jupyter={"outputs_hidden": true} plot_3d(test_imgs) # - # ### Step2: Slices Windowing window = CT_Windowing(mode='lung', norm=True) lungw_imgs = window(test_imgs) def sample_stack(stack, rows=6, cols=6, start_with=10, show_every=5): fig,ax = plt.subplots(rows,cols,figsize=[18,20]) for i in range(rows*cols): ind = start_with + i*show_every ax[int(i/rows),int(i % rows)].set_title(f'slice {ind}') ax[int(i/rows),int(i % rows)].imshow(stack[ind], vmin=0, vmax=255,cmap='gray') ax[int(i/rows),int(i % rows)].axis('off') plt.show() sample_stack(lungw_imgs) # <a id='voxel'></a> # ### `Voxel-wise steps` # ### Step1: Pixelspacing # - The pixelspacing attribute you can find in the dicom files is an important one. It tells us how much physical distance is covered by one pixel. You can see that there are only 2 values that describe the x- and y-direction in the plane of a transversal slice. # - For one patient this pixelspacing is usually the same for all slices. # - But between patients the pixelspacing can differ due to personal or institutional preferences of doctors and the clinic and it also depends on the scanner type. Consequently if you compare two images in the size of the lungs it does not automatically mean that the bigger one is really larger in the physical size of the organ! 
# ![pixelspacing](sample/pixelspacing.png) # reference: https://simpleitk.readthedocs.io/en/v1.2.4/Documentation/docs/source/fundamentalConcepts.html # N patients N = 3 # + trainpath = "./input/rsna-str-pulmonary-embolism-detection/train" train_img_paths = load_scans_path(trainpath) # - # sample database uid = [uid.split('.')[-2].split('/')[-3:] for uid in train_img_paths] train_df_sample = pd.DataFrame(data=uid, columns=["StudyInstanceUID","SeriesInstanceUID","SOPInstanceUID"]) train_df_sample["folder_path"] = trainpath + "/" + train_df_sample.StudyInstanceUID + "/" + train_df_sample.SeriesInstanceUID train_df_sample # + def get_window_value(feature): if type(feature) == pydicom.multival.MultiValue: return np.int(feature[0]) else: return np.int(feature) pixelspacing_r = [] pixelspacing_c = [] slice_thicknesses = [] patient_id = [] patient_pth = [] row_values = [] column_values = [] window_widths = [] window_levels = [] patients = train_df_sample.SeriesInstanceUID.unique()[0:N] for patient in patients: patient_id.append(patient) path = train_df_sample[train_df_sample.SeriesInstanceUID == patient].folder_path.values[0] example_dcm = os.listdir(path)[0] patient_pth.append(path) dataset = pydicom.dcmread(path + "/" + example_dcm) spacing = dataset.PixelSpacing slice_thicknesses.append(dataset.SliceThickness) row_values.append(dataset.Rows) column_values.append(dataset.Columns) pixelspacing_r.append(spacing[0]) pixelspacing_c.append(spacing[1]) scan_properties = pd.DataFrame(data=patient_id, columns=["patient"]) scan_properties.loc[:, "rows"] = row_values scan_properties.loc[:, "columns"] = column_values scan_properties.loc[:, "area"] = scan_properties["rows"] * scan_properties["columns"] scan_properties.loc[:, "pixelspacing_r"] = pixelspacing_r scan_properties.loc[:, "pixelspacing_c"] = pixelspacing_c scan_properties.loc[:, "pixelspacing_area"] = scan_properties.pixelspacing_r * scan_properties.pixelspacing_c scan_properties.loc[:, "slice_thickness"] = slice_thicknesses scan_properties.loc[:, "patient_pth"] = patient_pth scan_properties.head() # + jupyter={"source_hidden": true} fig, ax = plt.subplots(1,2,figsize=(20,5)) sns.distplot(pixelspacing_r, ax=ax[0], color="Limegreen", kde=False) ax[0].set_title("Pixel spacing distribution \n in row direction ") ax[0].set_ylabel("Counts in train") ax[0].set_xlabel("mm") sns.distplot(pixelspacing_c, ax=ax[1], color="Mediumseagreen", kde=False) ax[1].set_title("Pixel spacing distribution \n in column direction"); ax[1].set_ylabel("Counts in train"); ax[1].set_xlabel("mm"); # - # ### Step2: Slice thickness # + jupyter={"source_hidden": true} counts = scan_properties.groupby(["rows", "columns"]).size() counts = counts.unstack() counts.fillna(0, inplace=True) fig, ax = plt.subplots(1,2,figsize=(20,5)) sns.distplot(slice_thicknesses, color="orangered", kde=False, ax=ax[0]) ax[0].set_title("Slice thicknesses of all patients"); ax[0].set_xlabel("Slice thickness in mm") ax[0].set_ylabel("Counts in train"); for n in counts.index.values: for m in counts.columns.values: ax[1].scatter(n, m, s=counts.loc[n,m], c="midnightblue") ax[1].set_xlabel("rows") ax[1].set_ylabel("columns") ax[1].set_title("Pixel area of ct-scan per patient"); # - trainpath2 = "./input/rsna-str-pulmonary-embolism-detection/train/0a4d7c9fa082" traindatainfo, train_imgs2 = load_ct_scans(trainpath2) traindatainfo.SliceThickness testpath2 = "./input/rsna-str-pulmonary-embolism-detection/test/62dfc5f411e8" testdatainfo, test_imgs2 = load_ct_scans(testpath2) test_imgs2 testdatainfo.SliceThickness 
fig, ax = plt.subplots(1,2,figsize=(20,10)) ax[0].imshow(train_imgs2[0], vmin=-1000, vmax=1000, cmap="gray") ax[0].axis("off") ax[1].imshow(test_imgs2[0], vmin=-1000, vmax=1000, cmap='gray') ax[1].axis("off") # ### Step3: Resampling the voxel size def resample(image, scan, new_spacing=[1,1,1]): # Determine current pixel spacing spacing = np.array([scan.SliceThickness] + list(scan.PixelSpacing), dtype=np.float32) resize_factor = spacing / new_spacing new_shape = np.round(image.shape * resize_factor) # recompute the resize factor and spacing such that we match the rounded new shape above rounded_resize_factor = new_shape / image.shape rounded_new_spacing = spacing / rounded_resize_factor # zoom with resize factor image = scipy.ndimage.interpolation.zoom(image, rounded_resize_factor, mode='nearest') return image, rounded_new_spacing img_resampled, spacing = resample(test_imgs2, testdatainfo, [1,1,1]) print("Shape before resampling\t", test_imgs2.shape) print("Shape after resampling\t", img_resampled.shape) fig, ax = plt.subplots(1,2,figsize=(20,10)) ax[0].imshow(test_imgs2[0], vmin=-1000, vmax=1000, cmap="gray") ax[0].axis("off") ax[1].imshow(img_resampled[0], vmin=-1000, vmax=1000, cmap='gray') ax[1].axis("off")
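# A quick sanity check (not in the original notebook): resampling should preserve the physical extent of the volume, i.e. shape times spacing stays roughly constant.

# +
# Physical extent in mm before and after resampling (uses the arrays defined above).
old_spacing = np.array([testdatainfo.SliceThickness] + list(testdatainfo.PixelSpacing), dtype=np.float32)
print("Extent before resampling [mm]:", np.array(test_imgs2.shape) * old_spacing)
print("Extent after resampling  [mm]:", np.array(img_resampled.shape) * spacing)
# -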
20,366
/fantasy football ML.ipynb
32d68a35e6a087c8a34c334d97bb4d56c28d8278
[]
no_license
jchu47/Fantasy-Football
https://github.com/jchu47/Fantasy-Football
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
22,904
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # 【問題1】べき乗の算術演算子を使用して作成 # べき乗の算術演算子を使用したプログラムを作ってください。 # 雛形として紙を1回折った時の厚さを計算するコードを用意しました。これを43回折った時のコードに書き換えてください。 # + """ 紙を1回折った時の厚さを計算するコード """ THICKNESS = 0.00008 folded_thickness = THICKNESS * 2**43 print("厚さ: {}メートル".format(folded_thickness)) # - # 【問題2】単位の変換 # 単位がメートルだと実感が湧きづらいので、◯◯万キロメートル に変換して表示させてください。 # # サンプルとして ◯◯キロメートル に変換したコードを用意したので、参考にして取り組んでください。小数点以下は2桁まで表示されるようにも指定しています。 print("厚さ: {:.2f}万キロメートル".format(folded_thickness/10000000)) # 月までの距離は約38万キロメートルだそうなので余裕で届いてますね # 【問題3】for文を使用して作成 # 次に、for文を使用したプログラムを作ってください。 # # べき乗の算術演算子は使ってはいけません。算術演算子は四則演算(+、-、*、/)のみ使えます。 # + cnt = 43 thickness = THICKNESS for i in range(cnt): thickness = thickness * 2 print("厚さ: {:.2f}万キロメートル".format(folded_thickness/10000000)) # - # 【問題4】計算時間の比較 # 2つの方法はどちらが正しいわけでもありませんが、コードの良さを評価する際には以下のような着目点があります。 # # 計算速度 # メモリの使用量 # 可読性 # 拡張性 # 再利用性 # 今回は計算速度を比較してみます。以下の雛形を使用して、2つの方法の計算時間を出力してください。そして、それぞれの計算時間の関係を簡単に説明してください。どちらの書き方が良さそうでしょうか。なお、変数の定義やprint文など、どちらの方法でも使われている部分は除いた範囲の時間を比較してください。 # + import time THICKNESS = 0.00008 cnt = 1000 def method_1(thickness, cnt): folded_thickness1 = thickness * 2**cnt return folded_thickness def method_2(thickness, cnt): for i in range(cnt): thickness = thickness * 2 return thickness start_1 = time.time() method_1(THICKNESS, cnt) elapsed_time1 = time.time() - start_1 start_2 = time.time() method_2(THICKNESS, cnt) elapsed_time2 = time.time() - start_2 print("time1 : {}[s]".format(elapsed_time1)) print("time2 : {}[s]".format(elapsed_time2)) # - # for文を使ったほうがかなり遅くなる? # + # %%timeit method_1(thickness, cnt) # + # %%timeit method_2(thickness, cnt) # - # 【問題5】リストへの保存 # ここまでは43回折った後の最後の値だけを使用していましたが、グラフで可視化するためには過程の値も必要です。for文を使用したコードに、過程の値合計44個を記録するコードを加えてください。 # # 《ヒント》 # # 空のリストを作成する。 # 折る前の値をリストに追加する。 # for文の中でn回折った時の値をリストに追加していく。 # 最終的にリストに44個の値が格納されていることをlen関数を用いて確認しておきましょう。 # + THICKNESS = 0.00008 cnt = 43 def method(thickness, cnt): thickness_list = [] thickness_list.append(thickness) for i in range(cnt): thickness = thickness * 2 thickness_list.append(thickness) return thickness_list thickness_list = method(thickness, cnt) print("リスト一覧:{}".format(thickness_list)) print("リストの長さ:{}".format(len(thickness_list))) # - # 【問題6】折れ線グラフの表示 # グラフの描画には Matplotlib という ライブラリ を用います。リストへ記録するコードの後ろで以下の雛形を使用してください。 # + """ グラフを表示する。タイトルと軸ラベル名付き。 """ import matplotlib.pyplot as plt # %matplotlib inline plt.title("thickness of folded paper") plt.xlabel("number of folds") plt.ylabel("thickness[m]") plt.plot(thickness_list) plt.show() # - # 【問題7】グラフのカスタマイズ # グラフをより見やすくカスタマイズしてみましょう。カスタマイズしたグラフを最低3種類作成してください。例えば以下のように書き換えることで、線の色を赤に変更できます。 plt.title("thickness of folded paper") plt.xlabel("number of folds") plt.ylabel("thickness[m]") plt.plot(thickness_list, color='red') plt.show() plt.title("thickness of folded paper") plt.xlabel("number of folds") plt.ylabel("thickness[m]") plt.plot(thickness_list, color='red', marker='o') plt.show() plt.title("thickness of folded paper") plt.xlabel("number of folds") plt.ylabel("thickness[m]") plt.plot(thickness_list, color='red', linestyle='--') plt.show() plt.title("thickness of folded paper") plt.xlabel("number of folds") plt.ylabel("thickness[m]") plt.plot(thickness_list, color='red', linestyle='--', linewidth=5) plt.show()
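# A related check (not part of the original exercise): the number of folds needed to reach a given distance follows directly from a base-2 logarithm, since each fold doubles the thickness.

# +
import math

THICKNESS = 0.00008  # same 0.08 mm sheet as above
distance_to_moon_m = 3.8e8  # the ~380,000 km figure quoted above
folds_needed = math.ceil(math.log2(distance_to_moon_m / THICKNESS))
print("Folds needed to reach the Moon:", folds_needed)  # 43, consistent with the exercise
# -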
3,615
/Day_014_HW.ipynb
094d5558cfa47d460d1f76d76c068921d4cf83fc
[]
no_license
kevinharryshen/MachineLearningMarathon
https://github.com/kevinharryshen/MachineLearningMarathon
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
48,706
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # 作業 - 參考範例程式碼,模擬一組負相關的資料,並計算出相關係數以及畫出 scatter plot # [作業目標] - 以下程式碼將示範在 python 如何利用 numpy 計算出兩組數據之間的相關係數,並觀察散佈圖 - 藉由觀察相關矩陣與散佈圖的關係, 希望同學對 負相關 的變數分布情形也有比較直覺的理解 # [作業重點] - 仿照 In[4], In[5] 的語法, 寫出負相關的變數, 並觀察相關矩陣以及分布圖 # + # 載入基礎套件 import numpy as np np.random.seed(1) import matplotlib import matplotlib.pyplot as plt # %matplotlib inline # - # ### 弱相關 # + # 隨機生成兩組 1000 個介於 0~50 的數的整數 x, y, 看看相關矩陣如何 x = np.random.randint(0, 50, 1000) y = np.random.randint(0, 50, 1000) # 呼叫 numpy 裡的相關矩陣函數 (corrcoef) np.corrcoef(x, y) # - # 將分布畫出來看看吧 plt.scatter(x, y) # ### 正相關 # + # 隨機生成 1000 個介於 0~50 的數 x x = np.random.randint(0, 50, 1000) # 這次讓 y 與 x 正相關,再增加一些雜訊 y = np.random.normal(0, 10, 1000) - x # 再次用 numpy 裡的函數來計算相關係數 np.corrcoef(x, y) # - # 再看看正相關的 x,y 分布 plt.scatter(x, y) down] id="gs5OVGPpKLji" # ## The text dataset # The text we will work with in this notebook is Three Men in a Boat by Jerome K. Jerome, a comical short story about the perils of going outside. # + [markdown] id="Mv7NrBDxKmcv" # #### Import the data # The text dataset required for this notebook can be downloaded from the following link: # # https://drive.google.com/open?id=1GWzEKtTcarb3LIfl0AK7byUZqEJcIBk- # # You should store the data in Drive for use in this Colab notebook. # + id="j6h1rikDK45E" # Run this cell to connect to your Drive folder from google.colab import drive drive.mount('/content/gdrive') # + id="sTFpUsFaKLjk" # Load the data with open('/content/gdrive/MyDrive/ThreeMenInABoat.txt', 'r', encoding='utf-8') as file: text_string = file.read().replace('\n', ' ') # + id="PwoiLWBbKLjo" # Perform some simple preprocessing, replacing dashes with empty spaces text_string = text_string.replace('—', '') # + id="u7LmnVu1KLjt" # View an excerpt of the data text_string[0:2001] # + id="-kFFz4l4KLjx" # Split the text into sentences. sentence_strings = text_string.split('.') # + id="oxdUUpzwKLj1" # View a sample of the dataset sentence_strings[20:30] # + [markdown] id="g_4HYbwgKLj5" # ## Create a Tokenizer object # + [markdown] id="oGjVbNZBKLj6" # The `Tokenizer` object allows you to easily tokenise words or characters from a text document. It has several options to allow you to adjust the tokenisation process. Documentation is available for the `Tokenizer` [here](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/text/Tokenizer). # + id="eFtFOWZFKLj7" # Define any additional characters that we want to filter out (ignore) from the text additional_filters = '—’‘“”' # + [markdown] id="AxwvvZFmKLj-" # The Tokenizer has a `filters` keyword argument, that determines which characters will be filtered out from the text. The cell below shows the default characters that are filtered, to which we are adding our additional filters. # + id="VKK3w-9qKLj_" # Create a Tokenizer object from tensorflow.keras.preprocessing.text import Tokenizer tokenizer = Tokenizer(num_words=None, filters='!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n' + additional_filters, lower=True, split=' ', char_level=False, oov_token='<UNK>', document_count=0) # + [markdown] id="_wYV3SxgKLkD" # In all, the `Tokenizer` has the following keyword arguments: # # `num_words`: int. the maximum number of words to keep, based on word frequency. Only the most common `num_words-1` words will be kept. If set to `None`, all words are kept. # # `filters`: str. 
Each element is a character that will be filtered from the texts. Defaults to all punctuation (inc. tabs and line breaks), except `'`. # # `lower`: bool. Whether to convert the texts to lowercase. Defaults to `True`. # # `split`: str. Separator for word splitting. Defaults to `' '`. # # `char_level`: bool. if True, every character will be treated as a token. Defaults to `False`. # # `oov_token`: if given, it will be added to word_index and used to replace out-of-vocabulary words during sequence_to_text calls. Defaults to `None`. # + [markdown] id="M980eD0EKLkE" # ### Fit the Tokenizer to the text # We can now tokenize our text using the `fit_on_texts` method. This method takes a list of strings to tokenize, as we have prepared with `sentence_strings`. # + id="IeHabG-2KLkG" # Build the Tokenizer vocabulary tokenizer.fit_on_texts(sentence_strings) # + [markdown] id="JuaOOHQbKLkK" # The `fit_on_texts` method could also take a list of lists of strings, and in this case it would recognise each element of each sublist as an individual token. # + [markdown] id="wYjsM6KKKLkL" # ### Get the Tokenizer configuration # Now that the Tokenizer has ingested the data, we can see what it has extracted from the text by viewing its configuration. # + id="L_18VT6GKLkM" # Get the tokenizer config as a python dict tokenizer_config = tokenizer.get_config() tokenizer_config.keys() # + id="gNSua0oIKLkQ" # View the word_counts entry tokenizer_config['word_counts'] # + [markdown] id="5OJnHsh2KLkU" # The above is the number of times each word appears in the corpus. As you can see, the word counts dictionaries in the config are serialized into plain JSON. The `loads()` method in the Python library `json` can be used to convert this JSON string into a dictionary. # + id="nUa_g68nKLkV" # Save the word_counts as a python dictionary import json word_counts = json.loads(tokenizer_config['word_counts']) # + [markdown] id="KOtvDfpTKLkZ" # The word index is derived from the `word_counts`. # + id="sjGlx6-WKLka" # View the word_index entry tokenizer_config['word_index'] # + id="MQRaKl1uKLke" # Save word_index and index_word as python dictionaries index_word = json.loads(tokenizer_config['index_word']) word_index = json.loads(tokenizer_config['word_index']) # + [markdown] id="-zRqW1aIKLki" # ## Map the sentences to tokens # You can map each sentence to a sequence of integer tokens using the Tokenizer's `texts_to_sequences()` method. As was the case for the IMDb data set, the number corresponding to a word is that word's frequency rank in the corpus. 
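# + [markdown]
# Before applying this to the full corpus, the effect of the `num_words` argument described above can be seen on a small made-up corpus. The toy sentences below are assumptions for illustration only; with `oov_token` set, index 1 is reserved for `<UNK>`, and `texts_to_sequences()` keeps only tokens whose index is below `num_words`, mapping the rest to the OOV index.

# +
# Illustrative sketch on a made-up toy corpus (not the Three Men in a Boat data)
toy_sentences = ['the cat sat on the mat', 'the cat ran', 'a dog ran']

toy_tokenizer = Tokenizer(num_words=4, oov_token='<UNK>')
toy_tokenizer.fit_on_texts(toy_sentences)

print(toy_tokenizer.word_index)                         # full index: most frequent words get the lowest numbers
print(toy_tokenizer.texts_to_sequences(toy_sentences))  # rarer words fall back to the <UNK> index
# -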
# + id="_FJMEaNXKLkj" # View the first 5 sentences sentence_strings[:5] # + id="ns_ah7NBKLkn" # Tokenize the data sentence_seq = tokenizer.texts_to_sequences(sentence_strings) # + id="abrMrAEPKLks" # The return type is a list type(sentence_seq) # + id="dbJNeMG0KLkw" # View the first 5 tokenized sentences sentence_seq[0:5] # + id="zRdO_zJ0KLkz" # Verify the mappings in the config print(word_index['chapter'], word_index['i']) print(word_index['three'], word_index['invalids']) print(word_index['sufferings'], word_index['of'], word_index['george'], word_index['and'], word_index['harris']) print(word_index['a'], word_index['victim'], word_index['to'], word_index['one'], word_index['hundred'], word_index['and'], word_index['seven'], word_index['fatal'], word_index['maladies']) print(word_index['useful'], word_index['prescriptions']) # + [markdown] id="mIontG0TKLk2" # ## Map the tokens to sentences # + [markdown] id="PUU1lvl9KLk3" # You can map the tokens back to sentences using the Tokenizer's `sequences_to_texts` method. # + id="As4ZEYf-KLk4" # View the first 5 tokenized sentences sentence_seq[0:5] # + id="M0XqMXQMKLk6" # Map the token sequences back to sentences tokenizer.sequences_to_texts(sentence_seq)[:5] # + id="yJCs4zfLKLk9" # Verify the mappings in the config print(index_word['362'], index_word['8']) print(index_word['126'], index_word['3362']) print(index_word['2319'], index_word['6'], index_word['36'], index_word['3'], index_word['35']) print(index_word['5'], index_word['1779'], index_word['4'], index_word['43'], index_word['363'], index_word['3'], index_word['468'], index_word['3363'], index_word['2320']) print(index_word['2321'], index_word['3364']) # + id="kyKr-N6GKLlA" # Any valid sequence of tokens can be converted to text tokenizer.sequences_to_texts([[92, 104, 241], [152, 169, 53, 2491]]) # + [markdown] id="a83bOZpsKLlD" # If a word is not featured in the Tokenizer's word index, then it will be mapped to the value of the Tokenizer's `oov_token` property. # + id="nRMvm1REKLlE" # Tokenize unrecognised words tokenizer.texts_to_sequences(['i would like goobleydoobly hobbledyho']) # + id="P8rks57hKLlH" # Verify the OOV token index_word['1'] # + [markdown] id="1bpcRoTkKLlK" # ## Further reading and resources # * https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/text/Tokenizer # * https://nlp.stanford.edu/IR-book/html/htmledition/tokenization-1.html # + id="4KtngT77ek5B"
8,811
/modelImplementation/.ipynb_checkpoints/CreateMovieNeighbors-checkpoint.ipynb
38b0bca8ba2197337c10be8c2b8a2c15f208c884
[]
no_license
parmeetsingh08/Z604_FINAL_PROJECT_NETFLIX_TEAM2
https://github.com/parmeetsingh08/Z604_FINAL_PROJECT_NETFLIX_TEAM2
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
74,275
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Plotly # # https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf # # ## Using it online # + #import plotly #plotly.tools.set_credentials_file(username='DemoAccount', api_key='lr1c37zw81') # - # ## Using it offline # import plotly import plotly.offline as py import plotly.graph_objs as go import numpy as np import pandas as pd # + trace0 = go.Scatter( x=[1, 2, 3, 4], y=[10, 15, 13, 17] ) trace1 = go.Scatter( x=[1, 2, 3, 4], y=[16, 5, 11, 9] ) data = [trace0, trace1] plotly.offline.iplot(data) # - #using json/dict plotly.offline.iplot({ "data": [go.Scatter(x=[1, 2, 3, 4], y=[4, 3, 2, 1])], "layout": go.Layout(title="hello world") }) # ## Tables # + import plotly.figure_factory as ff df = pd.read_csv("https://raw.githubusercontent.com/plotly/datasets/master/school_earnings.csv") table = ff.create_table(df) plotly.offline.iplot(table) # - df.head() # + [markdown] toc-hr-collapsed=false # ## Bar Chart # + trace1 = go.Bar( x=[1, 2, 3, 4], y=[16, 5, 11, 9] ) data = [trace1] plotly.offline.iplot(data) # + data = [go.Bar( x=[20, 14, 23], y=['giraffes', 'orangutans', 'monkeys'], orientation = 'h' )] py.iplot(data) # + trace1 = go.Bar(x=df.School,y=df.Gap) data = [trace1] py.iplot(data) # - # ## Layout # + trace_women = go.Bar(x=df.School, y=df.Women, name='Women', marker=dict(color='#ffcdd2')) trace_men = go.Bar(x=df.School, y=df.Men, name='Men', marker=dict(color='#A2D5F2')) trace_gap = go.Bar(x=df.School, y=df.Gap, name='Gap', marker=dict(color='#59606D')) data = [trace_women, trace_men, trace_gap] layout = go.Layout(title="Average Earnings for Graduates", xaxis=dict(title='School'), yaxis=dict(title='Salary (in thousands)')) fig = go.Figure(data=data, layout=layout) py.iplot(fig,) # - # ## Line Chart # + N = 500 x = np.linspace(0, 1, N) y = np.random.randn(N) df = pd.DataFrame({'x': x, 'y': y}) df.head() data = [ go.Scatter( x=df['x'], # assign x as the dataframe column 'x' y=df['y'] ) ] py.iplot(data) # - data = [ go.Scatter( x=df['x'], # assign x as the dataframe column 'x' y=df['y'], name="A label here" ), go.Scatter( x=df['x']*2, # assign x as the dataframe column 'x' y=df['y']*2, name="Other here" ) ] py.iplot(data) # ## Scatter Chart (line, markers, lines+markers) # + N = 100 random_x = np.linspace(0, 1, N) random_y0 = np.random.randn(N)+5 random_y1 = np.random.randn(N) random_y2 = np.random.randn(N)-5 # Create traces trace0 = go.Scatter( x = random_x, y = random_y0, mode = 'markers', name = 'markers' ) trace1 = go.Scatter( x = random_x, y = random_y1, mode = 'lines+markers', name = 'lines+markers' ) trace2 = go.Scatter( x = random_x, y = random_y2, mode = 'lines', name = 'lines' ) data = [trace0, trace1, trace2] py.iplot(data, filename='scatter-mode') # - # ## Pie Chart # + labels = ['Oxygen','Hydrogen','Carbon_Dioxide','Nitrogen'] values = [4500,2500,1053,500] trace = go.Pie(labels=labels, values=values) py.iplot([trace]) # - # ## Bubble Chart # + trace0 = go.Scatter( x=[1, 2, 3, 4], y=[10, 11, 12, 13], mode='markers', marker=dict( size=[40, 60, 80, 100], ) ) data = [trace0] py.iplot(data) # + data = [ { 'x': [1, 3.2, 5.4, 7.6, 9.8, 12.5], 'y': [1, 3.2, 5.4, 7.6, 9.8, 12.5], 'mode': 'markers', 'marker': { 'color': [120, 125, 130, 135, 140, 145], 'size': [15, 30, 55, 70, 90, 110], 'showscale': True } } ] py.iplot(data) # - # ## Area Chart # + trace1 
= go.Scatter(
    x=[1, 2, 3, 4],
    y=[0, 2, 3, 5],
    fill='tozeroy'
)
trace2 = go.Scatter(
    x=[1, 2, 3, 4],
    y=[3, 5, 1, 7],
    fill='tonexty'  # toself
)

data = [trace1, trace2]
py.iplot(data, filename='basic-area')
# -

# ## Radar Chart

# +
trace1 = go.Scatterpolar(
    r=[39, 28, 25, 7, 28, 39],
    theta=['A', 'B', 'C', 'D', 'E', 'A'],
    fill='toself'
)

data = [trace1]

layout = go.Layout(
    polar=dict(
        radialaxis=dict(
            visible=True,
            range=[0, 50]
        )
    ),
    showlegend=False
)

fig = go.Figure(data=data, layout=layout)
py.iplot(fig)
# -

trace2 = go.Scatterpolar(
    r=[1.5, 10, 39, 31, 15, 1.5],
    theta=['A', 'B', 'C', 'D', 'E', 'A'],
    fill='toself',
    name='Group B'
)

data = [trace1, trace2]
fig = go.Figure(data=data, layout=layout)
py.iplot(fig)

# ## Histograms

# +
N = 500
x = np.linspace(0, 1, N)
y = np.random.randn(N)
df = pd.DataFrame({'x': x, 'y': y})
df.head()

data = [go.Histogram(y=df['y'])]
py.iplot(data)
# -

# ## Box Plot
#
# A minimal box-plot example is sketched at the end of this notebook.

# # Plotly Controls

# +
import numpy as np

data = [dict(visible=False,
             line=dict(width=6),
             name='𝜈 = ' + str(step),
             x=np.arange(0, 10, 0.01),
             y=np.sin(step * np.arange(0, 10, 0.01))) for step in np.arange(0, 5, 0.1)]
data[10]['visible'] = True

steps = []
for i in range(len(data)):
    step = dict(
        method='restyle',
        args=['visible', [False] * len(data)],
    )
    step['args'][1][i] = True  # Toggle i'th trace to "visible"
    steps.append(step)

sliders = [dict(
    active=10,
    currentvalue={"prefix": "Frequency: "},
    pad={"t": 50},
    steps=steps
)]

layout = dict(sliders=sliders)
fig = dict(data=data, layout=layout)
py.iplot(fig)
# -

# ## Additional Features

from IPython.display import YouTubeVideo
YouTubeVideo("wupToqz1e2g")
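# The 'Box Plot' heading above was left without an example, so here is a minimal sketch using the same
# offline py.iplot workflow; the two random samples are made up purely for illustration.

# +
# Minimal box plot sketch for the 'Box Plot' section above
y0 = np.random.randn(50) - 1
y1 = np.random.randn(50) + 1

trace0 = go.Box(y=y0, name='Sample A')
trace1 = go.Box(y=y1, name='Sample B')

py.iplot([trace0, trace1])
# -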
5,974
/Assignment1.ipynb
a23a3929499af3ccdf9aa0fb5be26fb20e067e52
[]
no_license
WawiraCarol/ads_assignment1
https://github.com/WawiraCarol/ads_assignment1
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
117,917
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.15.2
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
import pandas as pd

wsb_full_2020 = pd.read_pickle('../Data/2020wsb.pkl')
wsb_full_2021 = pd.read_pickle('../Data/2021wsb.pkl')
wsb_full = wsb_full_2020.append(wsb_full_2021)  # pd.concat([wsb_full_2020, wsb_full_2021]) is the modern equivalent
# -

wsb_full.to_csv("wsb_full.csv")

wsb_full.removed_by_category.value_counts()

not_deleted = wsb_full[wsb_full.removed_by_category.isnull()]
deleted = wsb_full[~wsb_full.removed_by_category.isnull()]

len(not_deleted)

not_deleted.id

upv_deleted = deleted.sort_values(by='score')[['title', 'id', 'removed_by_category', 'score']]

upv_deleted.score.plot(kind='bar')

upv_deleted[upv_deleted.score > 1]

wsb_full = wsb_full.set_index('id')

pd.set_option("display.max_rows", 300, "display.max_columns", None)

wsb_full.loc['eipw3g']

wsb_full.loc['n6l7lp']

# Posts that were removed (removed_by_category is populated)
wsb_full[wsb_full.removed_by_category.notnull()]

list(wsb_full.columns)

# +
# Load the cleaned data set used by the cells below
wsb = pd.read_pickle('../Data/wsb_cleaned.pkl')
#wsb.reset_index()

# +
#wsb[['ups', 'upvote_ratio']]
# -

ignored = wsb[(wsb.ups == 1) & (wsb.upvote_ratio == 1.00)]

pd.set_option('display.max_colwidth', 400)

deleted = wsb[wsb.selftext.isin(["[deleted]\n", "[removed]\n"])]

len(deleted)

ignored_not_deleted = ignored[~ignored.selftext.isin(["[deleted]\n", "[removed]\n"])]

ignored_not_deleted[['title', 'selftext', 'id', 'created_datetime_utc']]  #.sort_values(by = 'created_datetime_utc')

jan1 = wsb[(wsb.created_datetime_utc.dt.dayofyear == 1) & (wsb.created_datetime_utc.dt.year == 2021)]

# This selection supersedes the one above: the first ISO week of 2021 rather than just 1 January
# (dt.weekofyear is deprecated in recent pandas; dt.isocalendar().week is the replacement)
jan1 = wsb[(wsb.created_datetime_utc.dt.weekofyear == 1) & (wsb.created_datetime_utc.dt.year == 2021)]

jan1.ups.value_counts().plot(kind='bar')

jan1['ups_proportion'] = jan1.ups / jan1.ups.sum()

jan1['ups_proportion'].sort_values().tail(60)

jan1.ups.value_counts()
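# A natural follow-up sketch (an assumed addition, not part of the original exploration): the share of posts per
# ISO week whose selftext was deleted or removed, reusing the marker strings and the cleaned `wsb` frame from the
# cells above. `dt.isocalendar()` needs pandas >= 1.1.

# +
# Weekly share of deleted/removed posts in the cleaned 2021 data (illustrative sketch)
wsb_2021 = wsb[wsb.created_datetime_utc.dt.year == 2021]

removed_mask = wsb_2021.selftext.isin(["[deleted]\n", "[removed]\n"])
week_of_year = wsb_2021.created_datetime_utc.dt.isocalendar().week

weekly_removed_share = removed_mask.groupby(week_of_year).mean()
weekly_removed_share.plot(kind='bar', title='Share of deleted/removed posts per week, 2021')
# -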
1,980
/.ipynb_checkpoints/diff_xml-checkpoint.ipynb
ff9421de57fb321db3f991db79b4000f2cc0b795
[]
no_license
divya1md/anderson
https://github.com/divya1md/anderson
0
0
null
null
null
null
Jupyter Notebook
false
false
.py
35,138
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.15.2
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
import xml.etree.ElementTree as ET
import xlwt
from xlwt import Workbook
import pandas as pd

# Workbook is created (the xlwt sheet is not used below; pandas.to_excel handles the export)
wb = Workbook()
# sheet1 = wb.add_sheet('Sheet 1', cell_overwrite_ok=True)

tree = ET.parse('E:/offc_work/anderson/xmltcsv/Quick Quote-20191025T162809Z-001/C_13556.xml_2019_09_27_15_06_28_086.xml')
root = tree.getroot()

q, q1, q2, q3, q4, q5, q6, q7, q8, q9, q10, q11, q12 = [], [], [], [], [], [], [], [], [], [], [], [], []

# Collect the fields of every LineItem element
for child in root.findall('./LineItemMasters/LineItemMaster/LineItems/LineItem'):
    print(child.tag)
    a = child.find("LineItemNumber").text
    print(a)
    q = q + [a]
    b = child.find("LineItemName").text
    q1 = q1 + [b]
    c = child.find("ConfiguratorType").text
    q2 = q2 + [c]
    d = child.find("ConfiguratorPKVersion").text
    q3 = q3 + [d]
    f = child.find("LineItemID").text
    q5 = q5 + [f]
    g = child.find("ParentLineItemID").text
    q6 = q6 + [g]
    h = child.find("LinePrice").text
    q7 = q7 + [h]
    i = child.find("Cost").text
    q8 = q8 + [i]
    j = child.find("Description").text
    q9 = q9 + [j]
    k = child.find("Quantity").text
    q10 = q10 + [k]
    m = child.find("LongDescription").text
    q12 = q12 + [m]

# The EDIDescription sits inside the LICustomDataXML sub-element
for child in root.findall('./LineItemMasters/LineItemMaster/LineItems/LineItem/LICustomDataXML'):
    print(child.tag)
    l = child.find("EDIDescription").text
    q11 = q11 + [l]

df = pd.DataFrame()
df['LineItemNumber'] = q
df['LineItemName'] = q1
df['ConfiguratorType'] = q2
df['ConfiguratorPKVersion'] = q3
df['LineItemID'] = q5
df['ParentLineItemID'] = q6
df['LinePrice'] = q7
df['Cost'] = q8
df['Description'] = q9
df['Quantity'] = q10
df['EDIDescription'] = q11
df['LongDescription'] = q12
print(df['LongDescription'])

df.to_excel('E:/offc_work/anderson/xmltcsv/Quick Quote-20191025T162809Z-001/C_13556.xml_2019_09_27_15_06_28_086.xlsx')
# -

# +
import pandas as pd

df1 = pd.read_excel('B_13436.xml_2019_08_21_13_44_50_529.xlsx')
df2 = pd.read_excel('C_13556.xml_2019_09_27_15_06_28_086.xlsx')

# +
import re

# Build a UID for each row from the {...} prefix of the EDIDescription and the LineItemNumber.
# Create the UID column first and assign with .loc to avoid chained-indexing errors.
df1["UID"] = ""
for i in range(0, len(df1["EDIDescription"])):
    s = df1["EDIDescription"][i]
    prefix = re.search(r'{(.*?)}', s).group(1)  # assumes every EDIDescription contains a {...} token
    uid = prefix + str(df1["LineItemNumber"][i])
    df1.loc[i, "UID"] = uid.strip("#")

print(df1)
df1.to_excel("B1.xlsx", index=False)
# -
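# Since this notebook is about diffing two quote exports, a presumable next step is to compare df1 and df2
# directly. The cell below is an assumed sketch, not part of the original workflow: it builds the UID for df2
# in the same way as above (assuming the B_13436 export carries the same columns and that every EDIDescription
# contains a {...} token) and flags line items whose LinePrice differs; the output filename is arbitrary.

# +
def build_uid(frame):
    # Same UID construction as for df1: {prefix} from EDIDescription + LineItemNumber, with '#' stripped
    uids = []
    for i in range(0, len(frame["EDIDescription"])):
        prefix = re.search(r'{(.*?)}', frame["EDIDescription"][i]).group(1)
        uids.append((prefix + str(frame["LineItemNumber"][i])).strip("#"))
    return uids

df2["UID"] = build_uid(df2)

# Outer merge on UID so that items present in only one quote are also surfaced
merged = df1.merge(df2, on="UID", how="outer", suffixes=("_B", "_C"))
price_diff = merged[merged["LinePrice_B"] != merged["LinePrice_C"]]
price_diff[["UID", "LinePrice_B", "LinePrice_C"]].to_excel("price_diff.xlsx", index=False)
# -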
2,469
/stm_analysis/stm_analysis_notebook.ipynb
9ed7ebc0c467bfb32e6c3a349cb15748e9a4e7ee
[]
no_license
tobias-gill/STM_flatfile_analysis
https://github.com/tobias-gill/STM_flatfile_analysis
2
1
null
2017-05-17T12:26:08
2017-05-09T20:04:14
Python
Jupyter Notebook
false
false
.py
28,851
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] deletable=true editable=true # Version control: # Last updated Jupyter notebook: 15-05-2017 # Compatible MATRIX versions: 3.3.1, 3.0. # Python version: 3.6.1 # # Authors: # Procopios Constantinou & Tobias Gill # London Centre for Nanotechnology # [email protected] # [email protected] # + [markdown] deletable=true editable=true # # STM data summary and analysis platform # # ### Contents # * [0 - Installing Jupyter Notebook](#0) # * [1 - Data selection](#1) # * [2 - Topography analysis](#2) # * [3 - Spectroscopy I(V) analysis](#3) # * [4 - Current-distance I(z) analysis](#4) # * [5 - Current-imaging-tunneling spectroscopy (CITS) analysis](#5) # * [6 - Supplementary information](#6) # + [markdown] deletable=true editable=true # This is a [Jupyter notebook](http://jupyter.readthedocs.io/en/latest/) that summarises and analyses any data obtained from STM experiments performed at the [London Centre for Nanotechnology (LCN)](https://www.london-nano.com/). The raw data can take the form of a topography scan (*.Z_flat* file), a spectroscopy scan (*.I(V)_flat* file) and a current-distance scan (*.I(Z)_flat* file) - all of which are displayed and analysed within this Jupyter notebook. # # There are two essential requirements for this Jupyter notebook to run without any issues: # - The initial raw MATRIX files must be converted to flat-files by using the [Vernissage](http://www.scientaomicron.com/en/software-downloads-matrix-spm-control/55) software, available by Scienta Omicron, for them to be viewed and/or analysed by this Jupyter notebook. More importantly, this will then allow you to use Vernissage as a data reduction tool, such that all the *good* and *sensible* data can be imported into this Jupyter notebook for viewing and subsequent analysis. # - The path to the parent directory, that holds all the data directories, each of which contain all the flat-files, must be defined by the *dataPath* variable and the path to the directory that contains the stm-analysis module must be defined by the *modulePath* variable. # # # This Jupyter notebook uses a *minimalistic* and *simplistic* interface so that you can get started right away, even with no prior training to the Python language. # + [markdown] deletable=true editable=true # ## 0 - Installing Jupyter Notebook <a class="anchor" id="0"></a> # While Jupyter runs code in many programming languages, Python is a requirement (Python 3.3 or greater, or Python 2.7) for installing the Jupyter Notebook. For new users, it is highly recommended [installing Anaconda](https://www.continuum.io/downloads). Anaconda conveniently installs Python, the Jupyter Notebook, and other commonly used packages for scientific computing and data science. It essentially is all you need, for either Mac or Windows. # # Use the following installation steps: # # 1. Download [Anaconda](https://www.continuum.io/downloads) and it is recommended to download Anaconda’s latest Python 3 version (currently Python 3.6.1). # 2. Install the version of Anaconda which you downloaded, following the instructions on the download page. # 3. Once Anaconda has been downloaded, run the Anaconda (or Anaconda - Navigator) application and on the home-page, select install Jupyter notebook. # 4. 
Congratulations, you have installed Jupyter Notebook and can get started right away! # # *Hint: All you need to know is the < Shift > < Enter > command that runs a cell of Python code within the Jupyter notebook.* # + [markdown] deletable=true editable=true # ## 1 - Data selection <a class="anchor" id="1"></a> # This first section of the Jupyter notebook is critical, as the data you select here is what will be subsequently displayed and analysed. Furthermore, this is the only section upon which all others are dependent upon because all of the analysis sections run completely independent from one another. # # # You should make sure the correct file path is written for both the *dataPath* and *modulePath* variables; # - *dataPath*: The path to the directory that holds the **folders** of all the different STM flat-file data. # - *modulePath*: The path to the directory that holds the **stm_analysis.py** script, which yields all the classes and functions to perform all the data-viewing and analysis. # # # If this is done correctly, the code in this section will run smoothly and the output will be a set of iPython button widgets (whose labels are identical to the folder names) that will allow you to select which folder of flat-file data should be loaded as the *data* object, which will hold all of the data from the chosen directory. One important thing to note is that if you select a different data directory during the analysis, all the analysis code will need to be restarted again (easiest to do this by going to the menu and selecting 'Kernel > Restart and Run all'). # + [markdown] deletable=true editable=true # The true power of this Jupyter notebook is that it allows you to load in **multiple folders simultaneously**, from **completly different *dataPath* directories**. If you wish to exploit this, all you need to do is follow the convention laid out here: # - If you wish to select multiple folders of data from the **same** *dataPath* directory, then you can create multiple *data* objects (labelled *data_1*, *data_2*, ..., *data_N*) from the same *dataPath*, which can each be called by '*data_N = stm.DataSelection(dataPath)*'. # - If you wish to select multiple folders of data from **different** *dataPath* directories, then you can define each *dataPath* explicitly (labelled *dataPath_1*, *dataPath_2*, ..., *dataPath_N*) and then create unique *data* objects (labelled *data_1*, *data_2*, ..., *data_N*) associated with each *dataPath* defined. This can be called by '*data_N = stm.DataSelection(dataPath_N)*'. # - Finally, all subsequent viewing and data analysis on all these different *data_N* objects can be performed by passing each *data_N* object through the stm analysis code, within the same cells. This will then display all the output figures adjacent to eachother, allowing them to be easily and simultaneously compared. 
# + deletable=true editable=true
# Loading in the file path of the stm analysis module
modulePath = '/Users/pconstantinou/Documents/Prog_GitHub/STM_flatfile_analysis/stm_analysis/'
# Loading in the file path to the data_1 directories
dataPath_1 = '/Users/pconstantinou/Documents/stm_data/'

# + deletable=true editable=true
# Forcing all figures to be plotted in-line throughout the Jupyter notebook
# %matplotlib inline

# Importing all the necessary python modules
import sys                      # Import the system parameters
sys.path.insert(0, modulePath)  # Add the directory of the stm analysis modules to the search path
import stm_analysis as stm      # Import the stm-analysis code

# + deletable=true editable=true
# Define the data objects that will extract all STM data from the selected data directories
data_1 = stm.DataSelection(dataPath_1)

# + [markdown] deletable=true editable=true
# *Hint: If a new dataPath directory is defined, the Python code must be executed from the top again, so that it loads in the changes.*

# + [markdown] deletable=true editable=true
# ## 2 - Topography analysis <a class="anchor" id="2"></a>

# + [markdown] deletable=true editable=true
# This section will explore the analysis of the STM topography scans that were obtained from STM experiments. This is done by using the *stm.STT(data_N, type)* function, which loads in the *data_N* object from Section 1 and executes a specific *type* of analysis. All of the relevant Python code executes the analysis in the background and in real-time, as the widget selections are changed. This is the only section that is split into multiple layers, given the vast number of permutations that the topography analysis can take. The different layers of the analysis are (i) leveling operations, (ii) image operations, (iii) 1D line-profiles and (iv) fast Fourier transforms. A detailed explanation of each stage of the topography analysis and the operations available is given below.

# + [markdown] deletable=true editable=true
# ### 2.1 - Leveling and Image operations

# + [markdown] deletable=true editable=true
# The leveling operations available are: global plane subtraction, local plane subtraction, linewise subtraction, three-point RoI leveling, polynomial background removal and zeroing the bottom of the STM plot. A dictionary of the scan information is printed out next to the figure.

# + deletable=true editable=true slideshow={"slide_type": "-"}
# Analysing all the topography scans for the selected directory
stt_1 = stm.STT(data_1)

# + [markdown] deletable=true editable=true
# ### 2.2 - 1D line profiles

# + [markdown] deletable=true editable=true
# A 1D line profile is taken between two given points, P1 and P2, defined in nanometres. The profile can then be fitted with a Gaussian or a Lorentzian, returning its maximum height and standard deviation, or analysed by fitting a sinusoid with Gaussians for periodic features.

# + deletable=true editable=true
# Defining the line profile across the stm topography scan
stt_line_1 = stm.STT_lineprof(stt_1)
# -

# +
# Analysing the line profile across the stm topography scan with a sinusoidal model
import numpy as np

def f(x, a, b, c):
    return a * np.sin(b * x + c)
# -

# ### 2.3 - 1D line profile statistics

# +
import numpy as np

points = np.array([[1, 1], [6, 5]])
A = stt_line_1.nm2pnt(points[1][0], stt_line_1.topo_data)
np.max(stt_line_1.line_prof_y)
# -

# + [markdown] deletable=true editable=true
# ### 2.4 - Fast Fourier Transform

# + [markdown] deletable=true editable=true
# Perform a Fourier transform of the STM topography image produced; a minimal sketch of this is given below.
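# + [markdown] deletable=true editable=true
# The cell below is an illustrative sketch only and is not part of the stm_analysis module: it assumes the levelled topography is available as a 2D numpy array (here taken from a hypothetical *stt_1.topo_data* attribute, so substitute whichever array holds the scan) and displays the shifted 2D FFT magnitude on a logarithmic scale.

# + deletable=true editable=true
# Illustrative 2D FFT sketch of the topography scan
import numpy as np
import matplotlib.pyplot as plt

topo = np.asarray(stt_1.topo_data)           # assumed attribute holding the levelled 2D scan
fft2d = np.fft.fftshift(np.fft.fft2(topo))   # 2D FFT with the zero-frequency component centred
power = np.log10(np.abs(fft2d) + 1e-12)      # log magnitude to compress the dynamic range

plt.imshow(power, cmap='viridis', origin='lower')
plt.title('FFT magnitude of the topography scan (log scale)')
plt.colorbar()
plt.show()

# + [markdown] deletable=true editable=true
# Bright spots away from the centre of this map correspond to periodic features in the real-space scan, such as the atomic lattice or a surface reconstruction.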
# + deletable=true editable=true # + [markdown] deletable=true editable=true # ### 2.5 - 3D topography profile # + [markdown] deletable=true editable=true # 1D and 2D FFT filtering # + [markdown] deletable=true editable=true # ## 3 - Spectroscopy $I(V)$ analysis <a class="anchor" id="3"></a> # This section will explore the analysis of $I(V)$ spectroscopy curves that were obtained from STS experiments. This is done by using the *stm.STS(data_N)* function, which loads in the *data_N* object from Section 1. All of the relevant Python code executes the analysis in the background and in real-time, as the widget selections are changed. A detailed explanation of each stage of the $I(V)$ curve analysis and the operations available are discussed below. # + deletable=true editable=true # Analysing all the STM spectroscopy curves for the selected directory sts_1 = stm.STS(data_1) # + [markdown] deletable=true editable=true # ### 3.1 - *Raw $I(V)$ files* # The first step is to load in the raw $I(V)$ curves that you wish to browse through, or analyse, by using the '*Raw $I(V)$ files*' scrollable selection box, located on the far left of the interaction pane. This selection box gives you various amounts of flexibility in regards to data selection: # - Single $I(V)$ files can be loaded by simply clicking on the file you wish to view. # - Multiple $I(V)$ files can be loaded in simultaneously using three methods; # 1. <*Ctrl*> (or <*Command*> on Mac) *clicking* on multiple, individual files loads in specific selections of $I(V)$ curves. # 2. <*Shift*>* clicking* allows entire blocks of $I(V)$ files to be loaded in. # 3. <*Ctrl*>* + A *(or <*Command*>* + A *on Mac) loads in every single $I(V)$ file. # # *Note: current tests have been performed with over 400 $I(V)$ curves being loaded in simultaneously and the analysis runs fine, but it may take about ~5-10s to execute completly.* # # The analysis takes into consideration all of the $I(V)$ curves selected, even if they have *different bias ranges* and *grid spacings* between their respective voltage domains; # - If multiple $I(V)$ curves are selected with different *bias ranges*, the Python program automatically determines a mutually consistent domain between all the selected $I(V)$ curves. # - If any of the $I(V)$ curves selected then have different *grid spacings*, a linearly interpolation is performed, so that they can be sampled onto the mutually consistent voltage domain. # # Therefore, the Python program essentially performs a cross correlation analysis between all of the selected $I(V)$ curves to ensure consistency in the voltage domain *range* and *grid spacing*. # # *Note: The voltage domain of the $I(V)$ curves can be selectively controlled by using the 'Restrict $V_{bias}$' slider, which is provided so that the $I(V)$ curves can be easily cropped. This allows you to do two things; (i) any anomalous data or maxed out data can be rejected, or (ii) the voltage bias domain can be restricted so that the maximum tunneling current is identical in the +/- bias regimes. Any $I(V)$ data that is cropped out is displayed as grey points in the corresponding 'Raw $I(V)$'figures.* # # # ### 3.2 - *$I(V)$ analysis* # The $I(V)$ spectroscopy analysis is split up into three main constituents: # # **1. 
Intermediate plots: ** # This performs the full analysis on all the $I(V)$ curves that have been selected and it's corresponding figure shows each stage of the analysis, which follows the steps outlined below: # - **Averaging**: A global average is determined from all the $I(V)$ curves that have been selected. If '*Both*' traces are selected, then the average is taken over all the *traces* and *retraces*, however, the '*Traces*' option is there to allow the data analysis to be executed over just the '*Trace*' or '*Retrace*' $I(V)$ curves, if necessary. # - **Smoothing**: There is the option to provide no smoothing at all, but there are two additional options for either *Binomial* and *Savitzky-Golay* smoothing. The Binomial smoothing is effectivly a Gaussian filter and the '*Smoothing order*' option controls the window size over which the Binomial smoothing is performed. The Savitzky-Golay smoothing is the default smoothing method, as it is found to provide much better smoothing in regards to the raw $I(V)$ data. The '*Smoothing order*' option here controls the running polynomials order, with a fixed window size of 51 points. # - **Differentiating**: The $dI/dV$ curve that is displayed is the $dI/dV$ curve of the averaged and smoothed raw $I(V)$ data that has been selected. There are two important features that are included in the $dI/dV$ curve; (i) the entire $dI/dV$ curve has been *globally offset along the y-axis* by 1.1 times its minimum value, directly after differentiation, to ensure that there are no negative data points (as they would not be displayed on the semi-log plot of $dI/dV$), (ii) the variance is calculated by finding the difference between the mean $[I(V)]^2$ curves and the mean of the raw $I(V)$ curves. # # # **2. Point STS: ** # This performs the analysis generally associated with spectroscopy curves that were taken over specific points on the sample surface and it's corresponding figure shows all the raw $I(V)$ curves selected, along with the best estimate for the $dI/dV$ curve and it's band-gap. # # The most important feature associated with this analysis is the '*Band-gap*' slider, which allows you to selectivly define the location and range of the best estimate to the band gap. The Python program then determines the 1$\sigma$ and 2$\sigma$ estimations, based on the band-gap you have defined. The band-gap calculations are as follows: # - The voltage domain of the band-gap is directly selected and it's the length, along the voltage domain, defines the band gap. The constant y-axis position is determined by taking the average of the $dI/dV$ curve, over everything that is *within the band-gap window*. # - The 1$\sigma$ and 2$\sigma$ values of $dI/dV$ are then determined directly from the standard deviation of the $dI/dV$ data that lies *within the band-gap window*, and this can be transposed to get the associated values of the 1$\sigma$, and 2$\sigma$, VBM and CBM positions. # - All the information associated with the band-gap calculations is shown to the right of the corresponding figure. # # *Note: The band-gap calculator is very sensitive to the quality of data that is used and it should always be aimed to get a $dI/dV$ curve that looks like a 'V' or 'U'. Bad quality $dI/dV$ curves are ones that look like an 'M', which has minima that are much lower than that of the band-gap itself. 
In order to rectify this issue with bad data, it is recommended to cut the domain of the $I(V)$ curves (using the *Restrict V* slider), such that these spurious regions are deleted from the edges. This will ensure that the band-gap calculations will always work. # * # # # **3. Line STS: ** # This performs the analysis generally associated with spectroscopy curves that were taken over specific line-profiles on the sample surface (usually over some defect or step edges) and it's corresponding figures shows all the stacked $dI/dV$ curves in comparison the mean $dI/dV$ curve, but also an image of a train of the $dI/dV$ data in the form of an image. # # The two associated figures with this analysis demonstrate how the $dI/dV$ curves change as a function of the selected $I(V)$ curves; # - The *left* figure shows a comparison between all the individual $dI/dV$ curves and their corresponding mean. This gives an illustration of how the $dI/dV$ curves change over the different $I(V)$ curves selected. # - The *right* figure shows a train of all the $dI/dV$ curves, stacked in ascending order of file name. This gives a direct illustration of how the $dI/dV$ curves change over the different $I(V)$ curves selected, in the form of a CITS slice. Additionally, the VBM and CBM edges are displayed from the previous band-gap calculations performed in the 'Point STS' analysis and given you have a sufficient amount of $I(V)$ curves (~50+), the band-gap can be checked for consistency from the image. # # *Note: This analysis section is versatile because it can be used to compare various $I(V)$ curves that were either taken over identical or different regions and see directly how the $dI/dV$ curves change. Hence, this allows you to get good estimates of the band-gap, given that you have obtained a sufficient (~50+) amount of repeats, but to also identify any surface states that exist when $I(V)$ curves are taken alone a pristine-defect-pristine line. # * # # # # ### 3.3 - *Axes controls* # Finally, the '*Axes Controls*' are located on the far-right of the interaction pane and the default condition is that all the axes will *auto-scale* to a sensible size, given the selected I(V) files that have been loaded in. If you wish to change the limits on both the *x-* and *y-*axes directly, you can do this by selecting the '*Axes limit*' button; # - The voltage bias *V* slider simply controls the limits over the voltage-domain for all of the figures. # - The tunneling current *I* slider controls the symmetrical value of the tunneling current limit along the '*y-*axes' of the figures. # - The *dI/dV* slider controls the maximum value of *dI/dV* that appears in the figures and, by default, its minimum is taken at the location of the minimum *dI/dV* value. One important side-note is that you can use the '*dI/dV*' slider as a method to actively control the contrast of the image formed in the *Line STS* analysis section too. # # # *Note: If the axes are made smaller than the data being displayed in the figure, the data *does not get deleted or chopped*, rather it just remains invisible and off the axis.* # + [markdown] deletable=true editable=true # ## 4 - Current-distance I(z) analysis <a class="anchor" id="4"></a> # + [markdown] deletable=true editable=true # This section will explore the analysis of $I(Z)$ curves by using the *stm.STZ(data_N)* function, which loads in the *data_N* object from Section 1. 
All of the relevant Python code executes the analysis in the background and in real-time, as the widget selections are changed. A detailed explanation of each stage of the $I(Z)$ curve analysis and the operations available is given below.

# + deletable=true editable=true
# Analysing all the current-distance I(z) curves for the selected directory
stz_1 = stm.STZ(data_1)

# + [markdown] deletable=true editable=true
# The zero point of the I(Z) curve is the position of the tip when it reaches the set-point. It does not give the tip-sample distance! But the I(Z) curve can be used as a calibration, provided that the same set-point is used for all the scans.
#
# Make a plot of I(Z) as a function of the voltage bias, and then you can make a plot of the decay constant kappa as a function of the voltage bias, which then gives you the work function as a function of the voltage bias. From this you can extract the extent of the band bending as a function of the voltage bias, which you can then potentially use to correct the geometry of a CITS scan too. A worked sketch of the exponential I(z) fit used to extract kappa is given in the appendix cell at the end of this notebook.

# + [markdown] deletable=true editable=true
# ## 5 - Current-imaging-tunneling spectroscopy (CITS) analysis <a class="anchor" id="5"></a>

# + [markdown] deletable=true editable=true
# CITS scans are completely separate from all other scans. MATLAB can be used to obtain a 3D image of the CITS map.
#
# ### 5.1 - Raw CITS scans
#
# ### 5.2 - Topography corrected CITS (fixed kappa)
#
# ### 5.3 - Topography corrected CITS (using deltaz)

# + [markdown] deletable=true editable=true
# ## 6 - Supplementary information <a class="anchor" id="6"></a>

# + [markdown] deletable=true editable=true
# - *If you perform all the analysis and then change the selected folder in '1 - Data Selection', you will need to run all of the analysis code again from the top.*
#
# - *If you want to load in multiple files from different directories, this can be performed by creating a new data object for each directory with stm.DataSelection, as described in Section 1.*
#
# - *If you double click on any of the figures produced, it will zoom in.*
#
# - *Do not save CITS files in the same folder as STS curves, because they share the same '.I(V)_flat' format, which the Python program cannot distinguish between. Instead, create a separate CITS directory with all the CITS scans placed inside it.*
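# + [markdown] deletable=true editable=true
# ### Appendix - Sketch of the exponential I(z) fit
# The cell below is an added, illustrative sketch and is not part of the stm_analysis module. It assumes a single I(z) curve is available as two 1D numpy arrays, *z_nm* (tip displacement in nm) and *current_nA* (tunneling current), and fits $I(z) = I_0\,e^{-2\kappa z}$ to extract the decay constant $\kappa$ and the corresponding apparent barrier height.

# + deletable=true editable=true
# Illustrative sketch: extract kappa and the apparent barrier height from a single I(z) curve
import numpy as np
from scipy.optimize import curve_fit

def iz_model(z_nm, i0, kappa_per_nm):
    # One-dimensional tunneling model: I(z) = I0 * exp(-2 * kappa * z)
    return i0 * np.exp(-2.0 * kappa_per_nm * z_nm)

# z_nm and current_nA are assumed 1D arrays taken from one I(z) scan
popt, pcov = curve_fit(iz_model, z_nm, current_nA, p0=(current_nA[0], 10.0))
i0_fit, kappa_fit = popt

# Apparent barrier height (in eV) from kappa: phi = (hbar * kappa)^2 / (2 * m_e * e),
# with kappa converted from nm^-1 to m^-1
hbar, m_e, e = 1.054571817e-34, 9.1093837015e-31, 1.602176634e-19
phi_eV = (hbar * kappa_fit * 1e9) ** 2 / (2.0 * m_e * e)
print('kappa = {:.2f} nm^-1, apparent barrier height = {:.2f} eV'.format(kappa_fit, phi_eV))
# -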
22,315