content | license | path | repo_name | chain_length
---|---|---|---|---
stringlengths 73–1.12M | stringclasses 3 values | stringlengths 9–197 | stringlengths 7–106 | int64 1–144
<jupyter_start><jupyter_text>##### Copyright 2019 The TensorFlow Authors.<jupyter_code>#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.<jupyter_output><empty_output><jupyter_text># Dogs vs Cats Image Classification Without Image Augmentation
In this tutorial, we will discuss how to classify images as pictures of cats or pictures of dogs. We'll build an image classifier using a `tf.keras.Sequential` model and load data using `tf.keras.preprocessing.image.ImageDataGenerator`.
## Specific concepts that will be covered:
In the process, we will build practical experience and develop intuition around the following concepts:
* Building _data input pipelines_ using the `tf.keras.preprocessing.image.ImageDataGenerator` class — How can we efficiently work with data on disk to interface with our model?
* _Overfitting_ - what it is and how to identify it
**Before you begin**
Before running the code in this notebook, reset the runtime by going to **Runtime -> Reset all runtimes** in the menu above. If you have been working through several notebooks, this will help you avoid reaching Colab's memory limits.
# Importing packagesLet's start by importing required packages:
* os — to read files and directory structure
* numpy — for some matrix math outside of TensorFlow
* matplotlib.pyplot — to plot the graph and display images in our training and validation data
<jupyter_code>import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import os
import matplotlib.pyplot as plt
import numpy as np
import logging
logger = tf.get_logger()
logger.setLevel(logging.ERROR)<jupyter_output><empty_output><jupyter_text># Data LoadingTo build our image classifier, we begin by downloading the dataset. The dataset we are using is a filtered version of Dogs vs. Cats dataset from Kaggle (ultimately, this dataset is provided by Microsoft Research).
In previous Colabs, we've used TensorFlow Datasets, which is a very easy and convenient way to use datasets. In this Colab however, we will make use of the class `tf.keras.preprocessing.image.ImageDataGenerator` which will read data from disk. We therefore need to directly download *Dogs vs. Cats* from a URL and unzip it to the Colab filesystem.<jupyter_code>_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'
zip_dir = tf.keras.utils.get_file('cats_and_dogs_filtered.zip', origin=_URL, extract=True)<jupyter_output><empty_output><jupyter_text>The dataset we have downloaded has the following directory structure.
cats_and_dogs_filtered
|__ train
|______ cats: [cat.0.jpg, cat.1.jpg, cat.2.jpg ...]
|______ dogs: [dog.0.jpg, dog.1.jpg, dog.2.jpg ...]
|__ validation
|______ cats: [cat.2000.jpg, cat.2001.jpg, cat.2002.jpg ...]
|______ dogs: [dog.2000.jpg, dog.2001.jpg, dog.2002.jpg ...]
We can list the directories with the following terminal command:<jupyter_code>zip_dir_base = os.path.dirname(zip_dir)
!find $zip_dir_base -type d -print<jupyter_output><empty_output><jupyter_text>We'll now assign variables with the proper file path for the training and validation sets.<jupyter_code>base_dir = os.path.join(os.path.dirname(zip_dir), 'cats_and_dogs_filtered')
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
train_cats_dir = os.path.join(train_dir, 'cats') # directory with our training cat pictures
train_dogs_dir = os.path.join(train_dir, 'dogs') # directory with our training dog pictures
validation_cats_dir = os.path.join(validation_dir, 'cats') # directory with our validation cat pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs') # directory with our validation dog pictures<jupyter_output><empty_output><jupyter_text>### Understanding our dataLet's look at how many cats and dogs images we have in our training and validation directory<jupyter_code>num_cats_tr = len(os.listdir(train_cats_dir))
num_dogs_tr = len(os.listdir(train_dogs_dir))
num_cats_val = len(os.listdir(validation_cats_dir))
num_dogs_val = len(os.listdir(validation_dogs_dir))
total_train = num_cats_tr + num_dogs_tr
total_val = num_cats_val + num_dogs_val
print('total training cat images:', num_cats_tr)
print('total training dog images:', num_dogs_tr)
print('total validation cat images:', num_cats_val)
print('total validation dog images:', num_dogs_val)
print("--")
print("Total training images:", total_train)
print("Total validation images:", total_val)<jupyter_output><empty_output><jupyter_text># Setting Model ParametersFor convenience, we'll set up variables that will be used later while pre-processing our dataset and training our network.<jupyter_code>BATCH_SIZE = 100 # Number of training examples to process before updating our models variables
IMG_SHAPE = 150 # Our training data consists of images with width of 150 pixels and height of 150 pixels<jupyter_output><empty_output><jupyter_text># Data Preparation Images must be formatted into appropriately pre-processed floating point tensors before being fed into the network. The steps involved in preparing these images are:
1. Read images from the disk
2. Decode the contents of these images and convert them into the proper grid format according to their RGB content
3. Convert them into floating point tensors
4. Rescale the tensors from values between 0 and 255 to values between 0 and 1, as neural networks prefer to deal with small input values.
Fortunately, all these tasks can be done using the class **tf.keras.preprocessing.image.ImageDataGenerator**.
We can set this up in a couple of lines of code.<jupyter_code>train_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our training data
validation_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our validation data<jupyter_output><empty_output><jupyter_text>After defining our generators for training and validation images, the **flow_from_directory** method will load images from disk, apply rescaling, and resize them, all in a single line of code.<jupyter_code>train_data_gen = train_image_generator.flow_from_directory(batch_size=BATCH_SIZE,
directory=train_dir,
shuffle=True,
target_size=(IMG_SHAPE,IMG_SHAPE), #(150,150)
class_mode='binary')
val_data_gen = validation_image_generator.flow_from_directory(batch_size=BATCH_SIZE,
directory=validation_dir,
shuffle=False,
target_size=(IMG_SHAPE,IMG_SHAPE), #(150,150)
class_mode='binary')<jupyter_output><empty_output><jupyter_text>### Visualizing Training imagesWe can visualize our training images by getting a batch of images from the training generator, and then plotting a few of them using `matplotlib`.<jupyter_code>sample_training_images, _ = next(train_data_gen) <jupyter_output><empty_output><jupyter_text>The `next` function returns a batch from the dataset. One batch is a tuple of (*many images*, *many labels*). For right now, we're discarding the labels because we just want to look at the images.<jupyter_code># This function will plot images in the form of a grid with 1 row and 5 columns where images are placed in each column.
def plotImages(images_arr):
fig, axes = plt.subplots(1, 5, figsize=(20,20))
axes = axes.flatten()
for img, ax in zip(images_arr, axes):
ax.imshow(img)
plt.tight_layout()
plt.show()
plotImages(sample_training_images[:5]) # Plot images 0-4<jupyter_output><empty_output><jupyter_text># Model Creation## Define the model
The model consists of four convolution blocks with a max pool layer in each of them. Then we have a fully connected layer with 512 units and a `relu` activation function. The final layer has two units, one per class (dogs and cats), and outputs raw logits; class probabilities can be obtained from them with `softmax`. <jupyter_code>model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(150, 150, 3)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dense(2)
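# note: Dense(2) above outputs raw logits for the two classes; probabilities come from softmax, which the loss below applies implicitly (from_logits=True)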
])<jupyter_output><empty_output><jupyter_text>### Compile the model
As usual, we will use the `adam` optimizer. Since the model outputs logits for integer-labeled classes, we'll use `sparse_categorical_crossentropy` with `from_logits=True` as the loss function. We would also like to look at training and validation accuracy on each epoch as we train our network, so we are passing in the metrics argument.<jupyter_code>model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])<jupyter_output><empty_output><jupyter_text>### Model Summary
Let's look at all the layers of our network using the **summary** method.<jupyter_code>model.summary()<jupyter_output><empty_output><jupyter_text>### Train the modelIt's time to train our network.
Since our batches are coming from a generator (`ImageDataGenerator`), we'll use `fit_generator` instead of `fit`.<jupyter_code>EPOCHS = 100
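# Note (added aside): `fit_generator` is deprecated in newer TensorFlow releases (2.1+),
# where `model.fit` accepts generators directly. The equivalent call would look like:
#   history = model.fit(train_data_gen,
#                       steps_per_epoch=int(np.ceil(total_train / float(BATCH_SIZE))),
#                       epochs=EPOCHS,
#                       validation_data=val_data_gen,
#                       validation_steps=int(np.ceil(total_val / float(BATCH_SIZE))))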
history = model.fit_generator(
train_data_gen,
steps_per_epoch=int(np.ceil(total_train / float(BATCH_SIZE))),
epochs=EPOCHS,
validation_data=val_data_gen,
validation_steps=int(np.ceil(total_val / float(BATCH_SIZE)))
)<jupyter_output><empty_output><jupyter_text>### Visualizing results of the trainingWe'll now visualize the results we get after training our network.<jupyter_code>acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(EPOCHS)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.savefig('./foo.png')
plt.show()<jupyter_output><empty_output>
| no_license | /CNN/dogs_vs_cats_without_augmentation.ipynb | Rhavyx/TensorFlow_Notebooks | 16 |
<jupyter_start><jupyter_text># SortieralgorithmenZu jedem Sortieralgorithmus habe ich euch noch eine kleine 'Visualisierung' gecodet.
Dabei wird gezeigt, was in welcher Iteration (meistens der äußeren Schleife) passiert.
Ihr aktiviert die Visualisierung, in dem ihr unter dem jeweiligen Algo das visual-Flag auf true
setzt.<jupyter_code>def sort(algo, algov, sentence = "sorting", visual=False):
"""
Selects between the standard sorting algorithm and a given
visual version.
Args:
algo (function) : a sorting algorithm
algov (function): visualized sorting algorithm
sentence (str) : a string to sort
visual (bool) : True if you want to see the visualized algorithm
"""
li = list(sentence)
print('Before sort: "%s"' % sentence)
# select visual or non visual algorithm
# r is None for in-place algos
r = algov(li) if visual else algo(li)
print('After sort : "%s"' % ''.join(li if r is None else r))<jupyter_output><empty_output><jupyter_text>### Number of comparisons
We use the number of comparisons to compare sorting algorithms; in doing so we usually distinguish between the
worst case, average case and best case.## SelectionsortThe basic idea here is that we iterate over all elements of the list and, for each element, search for the smallest
element in the remaining list. The search always runs through to the end, because we can never be sure
that a smaller element does not show up one step further on. After the inner loop we swap the current
minimum to the front and move one position forward.
To the left of i there is thus a 'finished', sorted sublist, to the right an unsorted remainder.
So we always select the current minimum of the right sublist and move it to its front position.
Afterwards we shrink the right sublist by one from the left.<jupyter_code>def selectionsort(a):
N = len(a)
for i in range(N):
minimum = i
for k in range(i + 1, N):
if a[k] < a[minimum]:
minimum = k
a[i], a[minimum] = a[minimum], a[i]<jupyter_output><empty_output><jupyter_text>### Complexity Analysis
For Bubblesort, Insertionsort and Selectionsort, a derivation via the iterative properties is usually shorter.
Personally I find the recursive ones a bit nicer, but I don't want to withhold the iterative ones from you.
### Derivation via recursion
The first thing one notices about Selectionsort is that the inner loop always runs through to the end. Worst, average and
best case therefore coincide.
Let $T(N)$ be the number of comparisons for N input elements; then for Selectionsort $T(N)$ is:
$ T(N) = (N - 1) + T(N - 1) $
But for $T(N-1)$ we then already know the number of comparisons as well:
$ T(N) = (N - 1) + (N - 2) + T(N - 2) $
This continues ... but for how long? As soon as we reach $T(N - (N - 1))$, we have to think about what happens to the
inner loop. At that point it is not executed at all any more, so $T(1) = 0$.
$ T(N) = (N - 1) + (N - 2) + (N - 3) + ... + 2 + 1 + 0 $
With the Gauss summation formula:
$ T(N) = \frac{N * (N - 1)}{2} = O(N^{2})$
### Derivation via iteration
One sees that the outer loop is executed N times, so we expect a sum with N terms. The first term
represents the inner loop for i = 0, which is therefore executed (N-1) times, then (N-2) times and so on, until the
inner loop is not run at all any more. This gives:
$ T(N) = (N - 1) + (N - 2) + ... + 2 + 1 + 0 $
With the Gauss summation formula:
$ T(N) = \frac{N * (N - 1)}{2} = O(N^{2})$<jupyter_code>def selectionsort_visual(a):
from time import sleep
N = len(a)
for i in range(N):
minimum = i
for k in range(i + 1, N):
if a[k] < a[minimum]:
minimum = k
print(''.join(a))
s = list(' ' * N)
s[i], s[minimum] = 'i', 'm'
s = ''.join(s)
print(s)
sleep(2)
print(' ' * N)
a[i], a[minimum] = a[minimum], a[i]
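# A small empirical check (added here, not part of the original notebook): count the
# comparisons Selectionsort actually performs and compare with the derived N*(N-1)/2.
def selectionsort_comparisons(n):
    a = list(range(n))[::-1]      # the input order does not matter for Selectionsort
    comparisons = 0
    for i in range(n):
        minimum = i
        for k in range(i + 1, n):
            comparisons += 1      # one comparison per inner-loop step
            if a[k] < a[minimum]:
                minimum = k
        a[i], a[minimum] = a[minimum], a[i]
    return comparisons

for n in (5, 10, 20):
    print(n, selectionsort_comparisons(n), n * (n - 1) // 2)   # the counts match the formula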
sort(selectionsort, selectionsort_visual, visual = False)
print()
sort(selectionsort, selectionsort_visual, sentence = "0123456789", visual = False)
print()
sort(selectionsort, selectionsort_visual, sentence = "9876543210", visual = False)
print()
sort(selectionsort, selectionsort_visual, sentence = "0193284765", visual = False)<jupyter_output>Before sort: "sorting"
After sort : "ginorst"
Before sort: "0123456789"
After sort : "0123456789"
Before sort: "9876543210"
After sort : "0123456789"
Before sort: "0193284765"
After sort : "0123456789"
<jupyter_text>## InsertionsortÄhnlich wie Selectionsort haben wir eine äußere und eine innere Schleife, der Ansatz ist aber ein anderer. Der Algorithmus
vergleich immer von der aktuellen Position i mit allen vorherigen und sucht nach einer Lücke die er das Element einfügen
kann. Das Element wird zu dieser Lücke nach vorne getragen. Auch hier befindet sich somit eine Trennung von schon sortierter
Liste und unsortiertem Rest. Im Gegensatz zu Selectionsort kann sich dieser rechte Teil noch ändern, weil im avg case
ständig Elemente in die rechte Seite eingefügt werden.
Die innere Schleife läuft von i nach 0, bei Selectionsort lief sie nach hinten zu N.<jupyter_code>def insertionsort(a):
N = len(a)
for i in range(1, N):
cur, k = a[i], i
while k > 0:
if cur < a[k - 1]:
a[k] = a[k - 1]
else:
break
k -= 1
a[k] = cur
<jupyter_output><empty_output><jupyter_text>### Komplexitätsanalyse
Auch bei Insertionsort haben wir 2 Schleifen, anders aber als bei Selectionsort enthält die innere Schleife eine Bedingung,
die zum schnelleren Verlassen der Schleife führen kann. Eine Unterscheidung in best, worst und avg case ist notwenig.
### Herleitung mit Rekursion
Worst case
Die Bedingung in der inneren Schleife wird immer ausgeführt. Dies führt in der ersten Iteration dazu, dass die innere
Schleife exakt einmal durchlaufen wird. Es ergibt sich:
$ T(N) = 1 + T(N - 1) $
Im nächsten Durchlauf wird die innere Schleife zweimal ausgeführt:
$ T(N) = 1 + 2 + T(N - 2) $
Dies passiert bis die äußere Schleife auf N-1 steht, auch hier muss alles Durchlaufen werden:
$ T(N) = 1 + 2 + ... + T(N - (N - 1)) $
Hier ist $T(1) = N - 1$:
$ T(N) = 1 + 2 + ... + (N - 2) + (N - 1) $
Mit der Gauß'schen Summenformel:
$ T(N) = \frac{N * (N - 1)}{2} = O(N^{2})$
Best case
Die Bedingung in der inneren Schleife wird nur einmal ausgeführt und evaluiert zu false, was zur Folge hat, dass die
Schleife direkt breakt. Daraus ergibt sich für die erste Iteration:
$ T(N) = 1 + T(N - 1) $
Wie schon oben erwähnt gilt für jede weiter Iteration auch, dass die Schleife nur einmal durchlaufen wird. Dies sitzt sich
fort bis zu $ T(1) = 1$, da die äußere Schleife bei 1 startet, gibt es $N-1$ Summanden
$ T(N) = \underbrace{1 + 1 + ... + 1 + 1}_{(N-1)-Mal} = N-1 = O(N) $
Average case
Hier muss man abschätzen, wie oft die if-Bedingung ausgeführt wird. Im Mittel wird die Lücke immer genau in der Mitte liegen
somit ergibt sich die erste Iteration zu:
$ T(N) = \frac{1}{2} 1 + T(N - 1) $
Die nächste Iteration sieht dann wie folgt aus (beachtet, dass ihr wir das einhalb nicht ausmultiplizieren, da wir es gleich
ausklammern wollen):
$ T(N) = \frac{1}{2} 1 + \frac{1}{2} 2 + T(N - 2) $
Dies setzt sich wie im worst case bereits erklärt fort zu:
$ T(N) = \frac{1}{2} 1 + \frac{1}{2} 2 + ... + \frac{1}{2} (N - 2) + \frac{1}{2} (N - 1) $
Gauß'sche Summenformel:
$ T(N) = \frac{1}{2} \frac{N * (N - 1)}{2} = O(N^{2})$
### Herleitung durch Iteration
Worst case
Die äußere Schleife wird $N-1$-Mal ausgeführt, es gibt also genauso viele Summanden in unserer Ergebnisformel. Die innere
Schleife muss also aufsteigend von 1 bis $N-1$ ausgeführt werden, da sie maximale Durchläufe hat. Es ergibt sich folgende
Summe:
$ T(N) = 1 + 2 + ... + (N - 2) + (N - 1) $
Mit der Gauß'schen Summenformel:
$ T(N) = \frac{N * (N - 1)}{2} = O(N^{2})$
Best case
Die äußere Schleife wird $N-1$-Mal ausgeführt, es gibt also genauso viele Summanden in unserer Ergebnisformel. Die innere
Schleife wird immer direkt nach einer Abfrage abgebrochen - es gibt nur einen Vergleich pro Iteration.
$ T(N) = \underbrace{1 + 1 + ... + 1 + 1}_{(N-1)-Mal} = N-1 = O(N) $
Average case
Die äußere Schleife wird $N-1$-Mal ausgeführt - genau wie im worstcase wird die innere Schleife öfter ausgeführt - im
Schnitt im zur Hälfte zwischen aktueller Position und Start. Jeder Summand aus dem worstcase wird also mit $\frac{1}{2}$
multipliziert. Vereinfacht mit Gauß ergibt sich:
$ T(N) = \frac{1}{2} \frac{N * (N - 1)}{2} = O(N^{2})$<jupyter_code>def insertionsort_visual(a):
from time import sleep
N = len(a)
digits = len(str(N))
for i in range(1, N):
print(str(i).zfill(digits), ''.join(a))
cur, k = a[i], i
while k > 0:
print(' ' * digits, 'k:=' + str(k).zfill(digits), ''.join(a))
if cur < a[k - 1]:
a[k] = a[k - 1]
else:
break
k -= 1
a[k] = cur
sleep(2)
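# Another empirical check (added, not from the original notebook): Insertionsort needs
# N-1 comparisons on already sorted input (best case) and N*(N-1)/2 on reversed input (worst case).
def insertionsort_comparisons(data):
    a = list(data)
    comparisons = 0
    for i in range(1, len(a)):
        cur, k = a[i], i
        while k > 0:
            comparisons += 1          # the comparison `cur < a[k - 1]`
            if cur < a[k - 1]:
                a[k] = a[k - 1]
            else:
                break
            k -= 1
        a[k] = cur
    return comparisons

n = 10
print("sorted  :", insertionsort_comparisons(range(n)), "expected", n - 1)
print("reversed:", insertionsort_comparisons(range(n, 0, -1)), "expected", n * (n - 1) // 2)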
sort(insertionsort, insertionsort_visual, visual = False)
print()
sort(insertionsort, insertionsort_visual, sentence = "0123456789", visual = False)
print()
sort(insertionsort, insertionsort_visual, sentence = "9876543210", visual = False)
print()
sort(insertionsort, insertionsort_visual, sentence = "0193284765", visual = False)<jupyter_output>Before sort: "sorting"
After sort : "ginorst"
Before sort: "0123456789"
After sort : "0123456789"
Before sort: "9876543210"
After sort : "0123456789"
Before sort: "0193284765"
After sort : "0123456789"
<jupyter_text>## BubblesortBubblesort ist wohl einer der naivsten Sortieralgorithmen. Wenn die Aufgabe lautet "Sortiere die Liste aufsteigend", dann
ist der einfachste Ansatz wohl:"Suche das aktuell größte Element in der Liste und schiebe es an das Ende der Liste".
Genau das macht Bubblesort, die Elemente steigen wie Blasen auf.
Bubblesort ist ähnlich zu Selectionsort, nur dass hier die rechte Teilliste die schon fertig sortierte ist.<jupyter_code>def bubblesort(a):
for i in range(len(a), 0, -1):
for j in range(0, i - 1):
if a[j] > a[j + 1]:
a[j], a[j + 1] = a[j + 1], a[j]<jupyter_output><empty_output><jupyter_text>### Complexity Analysis
Exactly as with Selectionsort, no condition ever makes the inner loop exit early - worst, average and best case
therefore coincide.
The derivation is essentially analogous and you will again arrive at $T(N) = O(N^{2}) $.<jupyter_code>def bubblesort_visual(a):
from time import sleep
N = len(a)
for i in range(N, 0, -1):
for j in range(0, i - 1):
if a[j] > a[j + 1]:
a[j], a[j + 1] = a[j + 1], a[j]
print(''.join(a))
s = list(' ' * N) + [' ']
s[i] = '^'
s = ''.join(s)
print(s)
sleep(2)
print(' ' * N)
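# Empirical check (added, not in the original notebook): Bubblesort as written always performs
# exactly N*(N-1)/2 comparisons, no matter how the input is ordered.
def bubblesort_comparisons(data):
    a = list(data)
    comparisons = 0
    for i in range(len(a), 0, -1):
        for j in range(0, i - 1):
            comparisons += 1              # the comparison a[j] > a[j + 1]
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return comparisons

for data in ("0123456789", "9876543210", "0193284765"):
    print(data, bubblesort_comparisons(data), len(data) * (len(data) - 1) // 2)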
sort(bubblesort, bubblesort_visual, visual = False)
print()
sort(bubblesort, bubblesort_visual, sentence = "0123456789", visual = False)
print()
sort(bubblesort, bubblesort_visual, sentence = "9876543210", visual = False)
print()
sort(bubblesort, bubblesort_visual, sentence = "0193284765", visual = False)<jupyter_output>Before sort: "sorting"
After sort : "ginorst"
Before sort: "0123456789"
After sort : "0123456789"
Before sort: "9876543210"
After sort : "0123456789"
Before sort: "0193284765"
After sort : "0123456789"
<jupyter_text>## MergesortDie bisherigen Sortieralgorithmen waren alle in ihrem vorgehen iterativ. Das schnellste bisher war $ O(N) $ in einem best,
case meistens aber eher $O(N^{2})$. Die Idee wie man das ganze noch besser hinbekommt ist folgende:
Man nutzt die Eigenschaft aus, dass kleine Listen sich besser sortieren lassen als große und entwickelt einen Algorithmus,
der eine große Liste in zwei kleinere zerteilt, diese seperat sortiert und danach wieder zusammengefügt.
Dafür definieren wir zwei Funktionen mergesort, diese Funktion sortiert eine gegebene Liste, in dem sie
die Listen aufteilt, sich einmal mit der linken und dann mit der rechten selbst aufruft und diese sortiert.
Am Ende wird die zusammengefügte Ergebnisliste zurückgegeben. Hier sehen wir eine Neueiheit - die Eingabeliste wird nicht
direkt modifiziert sondern es gibt eine zusätzliche Liste.
Sortieralgorithmen die eine Liste direkt ändern, nennen wir in-place ansonsten out-of-place oder nicht in-
place (ich weiß nicht ob es eine offizielle Formulierung dafür gibt)
Damit die Rekursion in mergesort zu einem Ende kommt, brauchen wir eine Abbruchbedingung, deswegen definieren wir: Eine Liste mit maximal einem Element ist bereits sortiert. Bleibt nur noch die merge-Funktion, die zwei sortierte Teillisten zusammenfügt. Hierbei ist das Vorgehen so, dass immer das kleinste Element aus beiden Listen in die Ergebnisliste
hinzugefügt wird. Sobald eine Liste vollständig hinzugefügt wurde, wird die restliche verebleibende Liste einfach komplett
angefügt.
Algorithmen die ein Vorgehen in dieser Form aufweisen folgen dem "Divide and Conquer"-Prinzip. Diese Algorithmen
machen sich zu Nutze, dass die Tiefe des Baumes den sie erzeugen gering ist und ihr Aufwand maßgeblich durch diese
beeinflusst wird.<jupyter_code>def mergesort(a):
N = len(a)
if N <= 1:
return a
else:
left, right = a[0 : N//2], a[N//2 : N]
left_sorted, right_sorted = mergesort(left), mergesort(right)
return merge(left_sorted, right_sorted)
def merge(left, right):
res = []
while left and right:
if left[0] < right[0]:
res.append(left.pop(0))
else:
res.append(right.pop(0))
while left:
res.append(left.pop(0))
while right:
res.append(right.pop(0))
return res
a = list("0123456789")
mergesort(a)
print(a)
a = [int(x) for x in a]
mergesort(a)
print(a)
'''
def merge(left, right):
res = []
l, r = 0, 0
while l < len(left) and r < len(right):
if left[l] < right[r]:
res.append(left[l])
l += 1
else:
res.append(right[r])
r += 1
while l < len(left):
res.append(left[l])
l += 1
while r < len(right):
res.append(right[r])
r += 1
return res
'''
def mergesort(a):
n = len(a)
tmp = a[:]
merge_recursive(a, 0, n, tmp)
def merge_recursive(a, start, end, tmp):
if end - start > 1:
middle = start + (end - start) // 2
merge_recursive(a, start, middle, tmp)
merge_recursive(a, middle, end, tmp)
pos = start
i = start
j = middle
while pos < end:
if i < middle and (j == end or a[i] < a[j]):
tmp[pos] = a[i]
pos += 1
i += 1
else:
tmp[pos] = a[j]
pos += 1
j += 1
for i in range(end):
a[i] = tmp[i]
a = list("0123456789")
mergesort(a)
print(a)
a = [int(x) for x in a]
mergesort(a)
print(a)<jupyter_output>['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
<jupyter_text>### Komplexitätsanalyse
Da Mergesort ein rekursiver Algorithmus, können wir die Anzahl der Vergleiche nur mit einer Rekursionsformel herleiten.
Die Anzahl der Vergleiche für die Eingabegröße N setzt sich die Anzahl der Vergleich zusammen aus Kosten linke Seite
sortieren & rechte Seite sortieren und beide Seiten zusammenfügen. Je nachdem wie oft wir im merge-Durchlauf die obere
Schleife ausführen müssen ändert sich der Anteil des mergens. Im optimalen Fall sind beide Listen so sortiert, dass erst
die eine komplett zum Ergebnis hinzugefügt werden kann und dann die andere, somit brauchen wir $\frac{1}{2}N$ Vergleiche.
Im worst case müssen wir in der oberen Schleife N Vergleiche durchführen (die Bedingungen werden abwechselnd wahr).
#### Best case
$ T(N) = T(\lfloor\frac{1}{2}N\rfloor) + T(\lceil\frac{1}{2}N\rceil) + \frac{N}{2}$
Wir treffen nun die Annahme dass die beiden Hälften genau gleichgroß sind und das Abrunden/Aufrunden nicht nötig sind.
Dadurch vereinfacht sich die Formel zu:
$ T(N) = 2 * T(\frac{1}{2}) + \frac{N}{2}$
In der nächsten Schicht des Baumes / für 2 * $T(\frac{1}{2})$ hat jeder Aufruf von merge_sort nur noch ein Viertel der
Gesamteingabeliste, dafür gibt es vier Aufrufe und einen weiteren merge.
$ T(N) = 4 * T(\frac{1}{4}) + \frac{N}{2} + \frac{N}{2}$
$ T(N) = 8 * T(\frac{1}{8}) + \frac{N}{2} + \frac{N}{2} + \frac{N}{2}$
...
$ T(N) = c * T(\frac{1}{c}) + \underbrace{\frac{N}{2} + ... + \frac{N}{2}}_{m-Mal}$
m ist hierbei die Tiefe des Baums - wie wir nch sehen werden begträgt diese $m = \log_{2}{N}$
Somit ergibt sich für den hinteren Teil der Formel $\frac{1}{2} N \log_{2}{N}$, die linke Seite wird zu 0, da in der letzten
Schicht nur noch aus den ein-elementigen Listen besteht, dafür sind keine Vergleiche notwendig.
$T(N) = c * \underbrace{T(\frac{1}{c})}_{= 0} + \underbrace{\frac{N}{2} + ... + \frac{N}{2}}_{\frac{1}{2} N \log_{2}{N}}$
$T(N) = O(\frac{1}{2} N \log_{2}{N}) = O(N \log_{2}{N})$
#### Worst case
Das einzige was sich hier ändert ist der Summand der durch merge immer wieder dazu kommt. Somit fällt nur der Vorfaktor
$\frac{1}{2}$ weg. Also selbst im worst case hat Mergesort eine Komplexität von $O(N \log_{2}{N})$<jupyter_code>def mergesort_visual(a):
from time import sleep
print("mergesort", ''.join(a))
N = len(a)
if N <= 1:
sleep(2)
return a
else:
left, right = a[0 : N//2], a[N//2 : N]
left_sorted, right_sorted = mergesort_visual(left), mergesort_visual(right)
sleep(2)
return merge_visual(left_sorted, right_sorted)
def merge_visual(left, right):
from time import sleep
print("merge", ''.join(left), ''.join(right))
res = []
l, r = 0, 0
while l < len(left) and r < len(right):
if left[l] < right[r]:
res.append(left[l])
l += 1
else:
res.append(right[r])
r += 1
while l < len(left):
res.append(left[l])
l += 1
while r < len(right):
res.append(right[r])
r += 1
sleep(1)
return res
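# Empirical check (added, not part of the original notebook): count the comparisons made
# while merging and compare with N * log2(N); even on a reversed list the count stays below it.
from math import log2

def mergesort_count(a):
    if len(a) <= 1:
        return list(a), 0
    mid = len(a) // 2
    left, cl = mergesort_count(a[:mid])
    right, cr = mergesort_count(a[mid:])
    res, comparisons = [], cl + cr
    i = j = 0
    while i < len(left) and j < len(right):
        comparisons += 1                      # one comparison per merge step
        if left[i] < right[j]:
            res.append(left[i]); i += 1
        else:
            res.append(right[j]); j += 1
    res.extend(left[i:]); res.extend(right[j:])
    return res, comparisons

for n in (16, 64, 256):
    _, c = mergesort_count(list(range(n, 0, -1)))
    print(n, c, round(n * log2(n)))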
sort(mergesort, mergesort_visual, visual = False)
print()
sort(mergesort, mergesort_visual, sentence = "0123456789", visual = False)
print()
sort(mergesort, mergesort_visual, sentence = "9876543210", visual = False)
print()
sort(mergesort, mergesort_visual, sentence = "0193284765", visual = False)<jupyter_output>Before sort: "sorting"
After sort : "ginorst"
Before sort: "0123456789"
After sort : "0123456789"
Before sort: "9876543210"
After sort : "0123456789"
Before sort: "0193284765"
After sort : "0123456789"
<jupyter_text>## QuicksortIch weiß leider nicht, ob das die Implementierung ist, die ihr in der Vorlesung sehen werdet, so habe ich sie kennengelerent, das Prinzip bleibt ja aber das selbe.
Wir haben 3 Funktionen, wobei quicksort nur quicksort_impl mit dem gesamten Bereich der Liste aufruft.
Der Grundablauf von Quicksort sieht man in quicksort_impl, diese Funktion sortiert die Liste im übergebenen
Bereich l bis r, wobei l und r inklusive sind. Sollte r <= l sein, so hat der Listenabschnitt max 1 Element und wie bei
Mergesort gilt dann, dass die Liste schon sortiert ist.
Falls r > l gibt es noch Elemente zu sortieren: partition sorgt dafür, dass links des Pivot alle Elemente kleiner
sind als dieses und rechts von ihm alle größer. Danach wird zuerst die linke Teilliste sortiert und danach die rechte.
Das ist der Hauptunterschied zu Mergesort, Mergesort führt erst die Rekursion aus und ruft dann eine Hilfsfunktion auf
und Quicksort nutzt erst die Hilfsfunktion und führt dann die Rekursion durch.
Nun aber zum Herzstück von Quicksort partition. Zuerst wird das Pivotelement gewählt, dies ist hier willkürlich der
rechte Rand (normalerweise versucht man das Pivot eher random zu wählen). Die Wahl eines guten Pivots kann den Algorithmus
maßgeblich beschleunigen. Als nächstes werden zwei Laufvariablen definiert die einmal von links und einmal von rechts erstellt. Wir suchen dann von links das Element, welches größer ist als unser pivot und von rechts das, was kleiner
ist. Diese müssen wir dann vertauschen. Wenn i und j sich durchkreuzt haben sind wir fertig, wir müssen nur noch
das Pivot an seine endgültige Position bewegen, da wir danach nur noch die Liste links und die rechts vom Pivot sortieren.
Am Ende geben wir die Position des Pivotelements zurück um die nächsten Teillisten zu bestimmen.<jupyter_code>def quicksort(a):
quicksort_impl(a, 0, len(a) - 1)
def quicksort_impl(a, l, r):
if r > l: # if r <= l everything is sorted -> list has zero to one elements
i = partition(a, l, r)
quicksort_impl(a, l, i - 1)
quicksort_impl(a, i + 1, r)
def partition(a, l, r):
"""
This is the core of quicksort - a good pivot element can
speed up the whole process, a bad one will slow it down.
Here we will sort everything according to the selected pivot.
Everything left of the pivot should be less than the pivot,
everything on the right bigger.
"""
pivot = a[l] # pivot is the left end of the range - in most cases this is not a perfect choice
i, j = l + 1, r
while True:
# search from left. find first element bigger than pivot
while i < r and a[i] <= pivot:
i += 1
# search from right. find first element smaller than pivot
while j > l and a[j] >= pivot:
j -= 1
if j <= i: # index of the left side is now on the right side
break # everything on the left side is smaller and on the
# right side bigger than pivot
a[i], a[j] = a[j], a[i] # swap out of order elements
# because a[i] >= pivot, the final position of a[i] must be on the right side
# of the pivot, and because a[j] <= pivot and i >= j, index j is the final
# position of the pivot element
#a[r] = a[i]
#a[i] = pivot # move pivot to its position
a[l], a[j] = a[j], a[l]
return j
a = list("465442728718")
quicksort(a)
print(a)
<jupyter_output>['1', '2', '2', '4', '4', '4', '5', '6', '7', '7', '8', '8']
<jupyter_text>### Komplexitätsanalyse
Wie oben schon erwähnt, kann die Wahl des Pivots entscheidend sein, der worst case ist wenn das Pivot nach partition
immer am Rand liegt, der best case, wenn es exakt in der Mitte liegt.
### Worst case
Wenn das Pivot nach partition immer am Rand liegt haben wir eine leere Teilliste und eine mit $N - 1$ Elementen, dazu
kommen die bisherigen N Vergleiche um die ganze Liste zu scannen und einen extra, weil die Laufvariablen sich überholen.
$T(N) = \underbrace{T(0)}_{=0} + \underbrace{T(N - 1)}_{=T(N - 2) + N} + N + 1 $
Auch hier werden wir wieder auf eine Summe geführt:
$T(N) = 1 + ... + (N + 1) = \frac{(N + 1) * (N + 2)}{2} = O(N^{2}) $
Damit ist Quicksort im worstcase langsamer als Insertionsort, obwohl sie in der selben Komplexitätsklasse liegen.
### Best case
Im best case teilt partition die Liste in zwei gleich große Hälften, es ergibt sich somit:
$T(N) = 2 * T(\frac{N}{2}) + N + 1$
Wir sehen wir kommen auf eine ähnliche Formel wie bereits bei Mergesort, es ergibt sich somit, dass Quicksort im best case
auch in $O(N \log_{2}{N})$ liegt - Quicksort ist zu dem in-place, da immer die originale Liste modifiziert wird.<jupyter_code>def quicksort_visual(a):
quicksort_impl_visual(a, 0, len(a) - 1)
def quicksort_impl_visual(a, l, r):
print("quicksort", l, r)
from time import sleep
if r > l: # if r <= l everything is sorted -> list has zero to one elements
k = partition_visual(a, l, r)
sleep(1)
quicksort_impl_visual(a, l, k - 1) # sort left side of the pivot
sleep(1)
quicksort_impl_visual(a, k + 1, r) # sort right side of the pivot
sleep(1)
def partition_visual(a, l, r):
from time import sleep
pivot = a[r]
i, j = l, r - 1
while True:
while i < r and a[i] <= pivot:
i += 1
while j > l and a[j] >= pivot:
j -= 1
print(' ' + ''.join(a) + ' ')
s = list(' ' + ' ' * len(a) + ' ')
s[l + 3], s[r + 3] = '|', '|'
s[i + 3], s[j + 3] = 'i', 'j'
print(''.join(s))
if i < j:
a[i], a[j] = a[j], a[i]
else:
break
sleep(1)
a[r] = a[i]
a[i] = pivot
return i
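# Empirical check (added, not part of the original notebook): with one end of the range as the
# pivot, an already sorted list drives the comparison count towards ~N^2/2, while shuffled
# input stays in the order of N*log2(N).
from random import sample

def quicksort_comparisons(a, l, r):
    if r <= l:
        return 0
    pivot, i, j, comparisons = a[l], l + 1, r, 0
    while True:
        while i < r and a[i] <= pivot:
            comparisons += 1
            i += 1
        while j > l and a[j] >= pivot:
            comparisons += 1
            j -= 1
        if j <= i:
            break
        a[i], a[j] = a[j], a[i]
    a[l], a[j] = a[j], a[l]
    return comparisons + quicksort_comparisons(a, l, j - 1) + quicksort_comparisons(a, j + 1, r)

n = 64
print("sorted input  :", quicksort_comparisons(list(range(n)), 0, n - 1))       # grows roughly like N^2/2
print("shuffled input:", quicksort_comparisons(sample(range(n), n), 0, n - 1))  # grows roughly like N*log2(N)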
sort(quicksort, quicksort_visual, visual = False)
print()
sort(quicksort, quicksort_visual, sentence = "0123456789", visual = False)
print()
sort(quicksort, quicksort_visual, sentence = "9876543210", visual = False)
print()
sort(quicksort, quicksort_visual, sentence = "0193284765", visual = True)<jupyter_output>Before sort: "sorting"
After sort : "ginorst"
Before sort: "0123456789"
After sort : "0123456789"
Before sort: "9876543210"
After sort : "0123456789"
Before sort: "0193284765"
quicksort 0 9
0193284765
| i j |
0143289765
| ji |
quicksort 0 4
0143259768
|ji |
quicksort 0 1
0123459768
ji
quicksort 0 0
quicksort 2 1
quicksort 3 4
0123459768
ji
quicksort 3 3
quicksort 5 4
quicksort 6 9
0123459768
i j|
0123456798
|ji|
quicksort 6 7
0123456789
ji
quicksort 6 6
quicksort 8 7
quicksort 9 9
After sort : "0123456789"
| permissive | /AlDa/blatt3/sorting_algorithms.ipynb | lyubadimitrova/cl-classes | 11 |
<jupyter_start><jupyter_text># Spark Basics 2## ChainingWe can **chain** transformations and aaction to create a computation **pipeline**Suppose we want to compute the sum of the squares
$$ \sum_{i=1}^n x_i^2 $$
where the elements $x_i$ are stored in an RDD.<jupyter_code>#start the SparkContext
import findspark
findspark.init()
from pyspark import SparkContext
sc = SparkContext(master="local[4]")
print(sc)<jupyter_output><SparkContext master=local[4] appName=pyspark-shell>
<jupyter_text>### Create an RDD<jupyter_code>B=sc.parallelize(range(4))
B.collect()<jupyter_output><empty_output><jupyter_text>### Sequential syntax for chaining
Perform assignment after each computation<jupyter_code>Squares=B.map(lambda x:x*x)
Squares.reduce(lambda x,y:x+y) <jupyter_output><empty_output><jupyter_text>### Cascaded syntax for chaining
Combine computations into a single cascaded command<jupyter_code>B.map(lambda x:x*x)\
.reduce(lambda x,y:x+y)<jupyter_output><empty_output><jupyter_text>### Both syntaxes mean exactly the same thing
The only difference:
* In the sequential syntax the intermediate RDD has a name `Squares`
* In the cascaded syntax the intermediate RDD is *anonymous*
The execution is identical!### Sequential execution
The standard way that the map and reduce are executed is
* perform the map
* store the resulting RDD in memory
* perform the reduce### Disadvantages of Sequential execution
1. Intermediate result (`Squares`) requires memory space.
2. Two scans of memory (of `B`, then of `Squares`) - double the cache-misses.### Pipelined execution
Perform the whole computation in a single pass. For each element of **`B`**
1. Compute the square
2. Enter the square as input to the `reduce` operation.### Advantages of Pipelined execution
1. Less memory required - intermediate result is not stored.
2. Faster - only one pass through the Input RDD.### Lazy Evaluation
This type of pipelined evaluation is related to **Lazy Evaluation**. The word **Lazy** is used because the first command (computing the square) is not executed immediately. Instead, the execution is delayed as long as possible so that several commands are executed in a single pass.
The delayed commands are organized in an **Execution plan**For more on Pipelined execution, Lazy evaluation and Execution Plans see [spark programming guide/RDD operations](http://spark.apache.org/docs/latest/rdd-programming-guide.html#rdd-operations)### An instructive mistake
Here is another way to compute the sum of the squares using a single reduce command. Can you figure out how it comes up with this unexpected result?<jupyter_code>C=sc.parallelize([1,1,2])
C.reduce(lambda x,y: x*x+y*y)<jupyter_output><empty_output><jupyter_text>#### Answer:
1. `reduce` first operates on the pair $(1,1)$, replacing it with $1^2+1^2 = 2$
2. `reduce` then operates on the pair $(2,2)$, giving the final result $2^2+2^2=8$ ## getting information about an RDD
RDDs typically have hundreds of thousands of elements. It usually makes no sense to print out the content of a whole RDD. Here are some ways to get manageable amounts of information about an RDDCreate an RDD of length **`n`** which is a repetition of the pattern `1,2,3,4`<jupyter_code>n=1000000
B=sc.parallelize([1,2,3,4]*int(n/4))
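# Added aside (not in the original notebook): transformations such as `map` are lazy -
# the next line returns immediately without touching the data; only the `count()` action
# afterwards triggers the actual computation, following the planned lineage.
Squares=B.map(lambda x:x*x)
print(Squares.toDebugString())   # shows the execution plan; nothing has been computed yet
print(Squares.count())           # the action forces the whole pipeline to run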
#find the number of elements in the RDD
B.count()
# get the first few elements of an RDD
print('first element=',B.first())
print('first 5 elements = ',B.take(5))<jupyter_output>first element= 1
first 5 elements = [1, 2, 3, 4, 1]
<jupyter_text>### Sampling an RDD
* RDDs are often very large.
* Aggregates, such as averages, can be approximated efficiently by using a sample.
* Sampling is done in parallel and requires limited computation.The method `RDD.sample(withReplacement,p)` generates a sample of the elements of the RDD, where
- `withReplacement` is a boolean flag indicating whether or not an element in the RDD can be sampled more than once.
- `p` is the probability of accepting each element into the sample. Note that as the sampling is performed independently in each partition, the number of elements in the sample changes from sample to sample.<jupyter_code># get a sample whose expected size is m
# Note that the size of the sample is different in different runs
m=5.
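# Added aside (not in the original notebook): an aggregate such as the mean can be
# approximated from a sample instead of scanning the full RDD.
print('true mean   =', B.mean())
print('sample mean =', B.sample(False, 0.01).mean())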
print('sample1=',B.sample(False,m/n).collect())
print('sample2=',B.sample(False,m/n).collect())<jupyter_output>sample1= [1, 4, 1, 4, 3, 1, 3]
sample2= [1, 1, 4, 4, 1, 1, 4]
<jupyter_text>### Things to note and think about
* Each time you run the previous cell, you get a different estimate
* The accuracy of the estimate is determined by the size of the sample $n*p$
* See how the error changes as you vary $p$
* Can you give a formula that relates the variance of the estimate to $(p*n)$ ? (The answer is in the Probability and statistics course).### filtering an RDD
The method `RDD.filter(func)` returns a new dataset formed by selecting those elements of the source on which func returns true.
<jupyter_code>print('the number of elements in B that are > 3 =',B.filter(lambda n: n > 3).count())<jupyter_output>the number of elements in B that are > 3 = 250000
<jupyter_text>### Removing duplicate elements from an RDD
The method `RDD.distinct()` returns a new dataset that contains the distinct elements of the source dataset.
This operation requires a **shuffle** in order to detect duplication across partitions.<jupyter_code># Remove duplicate element in DuplicateRDD, we get distinct RDD
DuplicateRDD = sc.parallelize([1,1,2,2,3,3])
print('DuplicateRDD=',DuplicateRDD.collect())
print('DistinctRDD = ',DuplicateRDD.distinct().collect())<jupyter_output>DuplicateRDD= [1, 1, 2, 2, 3, 3]
DistinctRDD = [1, 2, 3]
<jupyter_text>### flatmap an RDD
The method `RDD.flatMap(func)` is similar to map, but each input item can be mapped to 0 or more output items (so func should return a Seq rather than a single item).<jupyter_code>text=["you are my sunshine","my only sunshine"]
text_file = sc.parallelize(text)
# map each line in text to a list of words
print('map:',text_file.map(lambda line: line.split(" ")).collect())
# create a single list of words by combining the words from all of the lines
print('flatmap:',text_file.flatMap(lambda line: line.split(" ")).collect())<jupyter_output>map: [['you', 'are', 'my', 'sunshine'], ['my', 'only', 'sunshine']]
flatmap: ['you', 'are', 'my', 'sunshine', 'my', 'only', 'sunshine']
<jupyter_text>### Set operations
In this part, we explore set operations including **union**,**intersection**,**subtract**, **cartesian** in pyspark<jupyter_code>rdd1 = sc.parallelize([1, 1, 2, 3])
rdd2 = sc.parallelize([1, 3, 4, 5])<jupyter_output><empty_output><jupyter_text>1. union(other)
* Return the union of this RDD and another one.
* Note that repetitions are allowed. The RDDs are **bags**, not **sets**
* To make the result a set, use `.distinct`<jupyter_code>rdd2=sc.parallelize(['a','b',1])
print('rdd1=',rdd1.collect())
print('rdd2=',rdd2.collect())
print('union as bags =',rdd1.union(rdd2).collect())
print('union as sets =',rdd1.union(rdd2).distinct().collect())<jupyter_output>rdd1= [1, 1, 2, 3]
rdd2= ['a', 'b', 1]
union as bags = [1, 1, 2, 3, 'a', 'b', 1]
union as sets = [1, 'a', 2, 3, 'b']
<jupyter_text>2. intersection(other)
* Returns the intersection of this RDD and another one. The output will not contain any duplicate elements, even if the input RDDs did. Note that this method performs a shuffle internally.<jupyter_code>rdd2=sc.parallelize([1,1,2,5])
print('rdd1=',rdd1.collect())
print('rdd2=',rdd2.collect())
print('intersection=',rdd1.intersection(rdd2).collect())<jupyter_output>rdd1= [1, 1, 2, 3]
rdd2= [1, 1, 2, 5]
intersection= [1, 2]
<jupyter_text>3. subtract(other, numPartitions=None)
* Return each value in self that is not contained in other.<jupyter_code>print('rdd1=',rdd1.collect())
print('rdd2=',rdd2.collect())
print('rdd1.subtract(rdd2)=',rdd1.subtract(rdd2).collect())<jupyter_output>rdd1= [1, 1, 2, 3]
rdd2= ['a', 'b', 1]
rdd1.subtract(rdd2)= [2, 3]
<jupyter_text>4. cartesian(other)
* Return the Cartesian product of this RDD and another one, that is, the RDD of all pairs of elements (a, b) where **a** is in **self** and **b** is in **other**.<jupyter_code>rdd2=sc.parallelize([1,1,2])
rdd2=sc.parallelize(['a','b'])
print('rdd1=',rdd1.collect())
print('rdd2=',rdd2.collect())
print('rdd1.cartesian(rdd2)=\n',rdd1.cartesian(rdd2).collect())<jupyter_output>rdd1= [1, 1, 2, 3]
rdd2= ['a', 'b']
rdd1.cartesian(rdd2)=
[(1, 'a'), (1, 'b'), (1, 'a'), (1, 'b'), (2, 'a'), (2, 'b'), (3, 'a'), (3, 'b')]
| no_license | /EDX_Big_Data_Analytics_Using_Spark/Section1-Spark-Basics/1.BasicSpark/.ipynb_checkpoints/4 Spark Basics 2-checkpoint.ipynb | NecmettinCeylan/Py_Works | 15 |
<jupyter_start><jupyter_text>### Training functions<jupyter_code>source("trainingFunctions.R")
buildBlendedModel <- function(train, test, p=.25, recruit_wgt=.5) {
blendedModel <- list()
test_wells <- unique(test$Well.Name)
# build a blended model for each well in the test data
for (well_i in test_wells) {
test_i <- test[test$Well.Name == well_i,]
test_iso <- max(test$Depth) - min(test$Depth)
# if test well has no PE log - remove PE features from training data
if (sum(is.na(test_i$PE_0)) > 0) {
train_i <- subset(train, select=-c(PE_n15, PE_n14, PE_n13, PE_n12, PE_n11, PE_n10, PE_n9, PE_n8, PE_n7, PE_n6, PE_n5,
PE_n4, PE_n3, PE_n2, PE_n1, PE_0, PE_1, PE_2, PE_3, PE_4, PE_5, PE_6, PE_7, PE_8,
PE_9, PE_10, PE_11, PE_12, PE_13, PE_14, PE_15))
} else {
train_i <- train
}
# train and weight models
blendedModel[[well_i]][["fits"]] <- trainBlendedModel(train_i)
blendedModel[[well_i]][["weights"]] <- weightBlendedModel(train_i, test_iso, p, recruit_wgt)
}
blendedModel
}<jupyter_output><empty_output><jupyter_text>### Evaluation functions<jupyter_code>source("evaluationFunctions.R")
predictBlendedModel <- function(test,
blendedModel,
classes=c("SS", "CSiS", "FSiS", "SiSh", "MS", "WS", "D", "PS", "BS")
) {
testPrime <- data.frame()
test_wells <- unique(test$Well.Name)
for (well_i in test_wells) {
test_i <- test[test$Well.Name == well_i,]
votes <- tallyVotes(test_i, blendedModel[[well_i]], classes)
test_i$Predicted <- electClass(test_i, votes)
testPrime <- rbind(testPrime, test_i)
}
testPrime
}<jupyter_output><empty_output><jupyter_text>### Tuning blending parameters <jupyter_code>source("accuracyMetrics.R")
options(warn=-1)
t0 <- Sys.time()
train <- lag[lag$Well.Name != "SHRIMPLIN" & lag$Well.Name != "CHURCHMAN BIBLE",]
test <- lag[lag$Well.Name == "SHRIMPLIN" | lag$Well.Name == "CHURCHMAN BIBLE",]
ps <- c(.5)
rws <- c(.5)
for (p in ps) {
for (rw in rws) {
blendedModel <- buildBlendedModel(train, test, p, rw)
testPrime <- predictBlendedModel(test, blendedModel)
f1 <- myF1Metric(testPrime$Predicted, testPrime$Facies)
print(paste("inv dist parameter p:", p, ", recruit weight:", rw, ", f1-score", round(f1,4)))
print("-------------")
}
}
tn <- Sys.time()
print(tn-t0)<jupyter_output>Loading required package: randomForest
randomForest 4.6-12
Type rfNews() to see new features/changes/bug fixes.
Attaching package: 'randomForest'
The following object is masked from 'package:ggplot2':
margin
<jupyter_text>### Cross-validation <jupyter_code>source("accuracyMetrics.R")
t0 <- Sys.time()
f1 <- NULL
wells <- unique(lag$Well.Name)
wells <- wells[!wells %in% "Recruit F9"]
for (i in 1:(length(wells)-1)) {
for (j in (i+1):length(wells)) {
trainIndex <- lag$Well.Name != wells[i] & lag$Well.Name != wells[j]
train <- lag[trainIndex,]
test <- lag[!trainIndex,]
blendedModel <- buildBlendedModel(train, test)
testPrime <- predictBlendedModel(test, blendedModel)
f1_i <- myF1Metric(testPrime$Predicted, testPrime$Facies)
f1 <- c(f1, f1_i)
print(paste("Test well 1:", wells[i], ", Test well 2:", wells[j], ", f1-score:", f1_i))
print("-------------")
}
}
print(paste("Minimum F1:", min(f1)))
print(paste("Average F1:", mean(f1)))
print(paste("Maximum F1:", max(f1)))
tn <- Sys.time()
print(tn-t0)<jupyter_output>[1] "Test well 1: SHRIMPLIN , Test well 2: ALEXANDER D , f1-score: 0.644444444444444"
[1] "-------------"
[1] "Test well 1: SHRIMPLIN , Test well 2: SHANKLE , f1-score: 0.616783216783217"
[1] "-------------"
[1] "Test well 1: SHRIMPLIN , Test well 2: LUKE G U , f1-score: 0.603351955307263"
[1] "-------------"
[1] "Test well 1: SHRIMPLIN , Test well 2: KIMZEY A , f1-score: 0.678474114441417"
[1] "-------------"
[1] "Test well 1: SHRIMPLIN , Test well 2: CROSS H CATTLE , f1-score: 0.674715909090909"
[1] "-------------"
[1] "Test well 1: SHRIMPLIN , Test well 2: NOLAN , f1-score: 0.573226544622426"
[1] "-------------"
[1] "Test well 1: SHRIMPLIN , Test well 2: NEWBY , f1-score: 0.520763187429854"
[1] "-------------"
[1] "Test well 1: SHRIMPLIN , Test well 2: CHURCHMAN BIBLE , f1-score: 0.63001485884101"
[1] "-------------"
[1] "Test well 1: ALEXANDER D , Test well 2: SHANKLE , f1-score: 0.625"
[1] "-------------"
[1] "Test well 1: ALEXANDER D , Test well 2: LUKE G U , f1-score: 0.65349887[...]
| permissive | /jpoirier/archive/jpoirier006.ipynb | yohanesnuwara/2016-ml-contest | 4 |
<jupyter_start><jupyter_text># * **InstaBot** - Part 2*#### In the following cell, I have
1. Imported necessary libraries<jupyter_code>from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time
import numpy as np
import pandas as pd
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from bs4 import BeautifulSoup
from selenium.common.exceptions import NoSuchElementException,StaleElementReferenceException
import matplotlib.pyplot as plt
import re<jupyter_output><empty_output><jupyter_text>#### In the two following cells, I have
1. Connected to chrome webdriver object
2. stored instagram url in a variable <jupyter_code>driver = webdriver.Chrome(executable_path="D:\Softwares\Selenium\chromedriver")
url = 'https://www.instagram.com/'<jupyter_output><empty_output><jupyter_text>#### In the following cell, I have
1. defined a function to go to the Instagram home page<jupyter_code>def goHomeInsta():
url = 'https://www.instagram.com/'
driver.get(url)
goHomeInsta()<jupyter_output><empty_output><jupyter_text>#### In the two following cells, I have
1. defined a function to login into Instagram
2. defined a function to logout of Instagram<jupyter_code>def login(paraUsername, paraPassword):
username = WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "input[name='username'")))
password = WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "input[name='password'")))
username.clear()
password.clear()
username.send_keys(paraUsername)
password.send_keys(paraPassword)
loginBtn = WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "button[type = 'submit']"))).click()
saveLoginInfoNotNowBtn = WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.XPATH, "//button[contains(text(), 'Not Now')]"))).click()
notificationNotNowBtn = WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.XPATH, "//button[contains(text(), 'Not Now')]"))).click()
print("Login Successful")
def logout():
time.sleep(2)
profileUrl = f'https://www.instagram.com/{username}/'
driver.get(profileUrl)
profileIconBtn = WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.XPATH,'//div[@class = "Fifk5"]/span'))).click()
logoutBtn = WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.XPATH, '//*[@id="react-root"]/section/nav/div[2]/div/div/div[3]/div/div[5]/div[2]/div[2]/div[2]/div[2]'))).click()
print("Logout Successful")<jupyter_output><empty_output><jupyter_text>#### Login<jupyter_code>username = 'nikhstered'
password = 'identitytheftisnotajokejim'
login(username ,password)<jupyter_output>Login Successful
<jupyter_text>## Task 1 : Now your friend has followed a lot of different food bloggers, he needs to analyse the habits of these bloggers.
1.1 From the list of Instagram handles you obtained when you searched ‘food’ in the previous project, open the first 10 handles and find the top 5 which have the highest number of followers
1.2 Now find the number of posts these handles have made in the previous 3 days.
1.3 Depict this information using a suitable graph.### 1.1 From the list of Instagram handles you obtained when you searched ‘food’ in the previous project, open the first 10 handles and find the top 5 which have the highest number of followers#### In the following cell, I have
1. defined a function to get all the profiles shown as a search result for the specific search keyword 'food'<jupyter_code>def getInstaHandles(keyword):
searchInput = WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.XPATH, '//input[@placeholder = "Search"]')))
searchInput.clear()
time.sleep(2)
searchInput.send_keys(keyword)
element = WebDriverWait(driver,10).until(EC.presence_of_element_located((By.XPATH, '//a[@class = "-qQT3"]')))
handlesWebElements = driver.find_elements_by_xpath('//a[@class = "-qQT3"]')
handles = []
try:
for i in handlesWebElements:
soup = BeautifulSoup(i.get_attribute('outerHTML'),'html.parser')
handle = soup.find(class_ = '_7UhW9').text
if handle[0] != "#":
handles.append(handle)
except StaleElementReferenceException:
print('Please check your keyword')
return handles<jupyter_output><empty_output><jupyter_text>#### In the following cell, I have
1. defined a function to open the specifed profile page <jupyter_code>def openProfile(handle):
try:
driver.get(url + handle)
except NoSuchElementException:
print('No Such Instagram Profile found')
return False<jupyter_output><empty_output><jupyter_text>#### In the following cell, I have
1. defined a function to
1.1 open a profile page using the above function( openProfile(handle))
1.2 get the followers count of that page<jupyter_code>def profileFollowersCount(handle):
openProfile(handle)
driver.refresh()
WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.XPATH, "//span[@class = 'g47SY ']")))
countOfFollowers = int(driver.find_elements_by_xpath("//span[@class = 'g47SY ']")[1].get_attribute('title').replace(',',''))
#print(countOfFollowers)
return countOfFollowers<jupyter_output><empty_output><jupyter_text>#### In the following cell, I have
1. stored the keyword 'food'
2. created an empty list
3. iterated over the first 10 profiles having keyword
3.1 stored the followers count returned by the function(profileFollowersCount())
3.2 appended this count to the list along with its handle<jupyter_code>keyword = 'food'
foodProfilesAndFollowers = []
for ihandle in getInstaHandles(keyword)[0:10]:
followers = profileFollowersCount(ihandle)
foodProfilesAndFollowers.append([ihandle,followers])
foodProfilesAndFollowers<jupyter_output><empty_output><jupyter_text>#### In the following cell, I have
1. created a pandas dataframe and appended the list to the dataframe
2. sorted it based on followers count in descending order and printed the top 5 rows<jupyter_code>print(f"Top 5 among the first 10 profiles for the keyword '{keyword}' are :")
foodProfilesDF = pd.DataFrame(foodProfilesAndFollowers, columns=["Profile Name", "No of Followers"])
top5foodProfilesDF = foodProfilesDF.sort_values(by=['No of Followers'],ascending=False,ignore_index=True)[0:5]
top5foodProfilesDF<jupyter_output>Top 5 among the first 10 profiles for the keyword 'food' are :
<jupyter_text>#### In the following cell, I have
1. plotted the above data into a pie chart<jupyter_code>plt.pie(top5foodProfilesDF['No of Followers'],labels=top5foodProfilesDF['Profile Name'],labeldistance= 1.05, radius=1.2,autopct="%.2f",pctdistance=.8)
plt.style.use("ggplot")
plt.title("Top 5 profiles with most no of followers")
plt.show()<jupyter_output><empty_output><jupyter_text>### 1.2 Now Find the number of posts these handles have done in the previous 3 days.#### In the following cell, I have
1. defined a function to retrieve the number of posts made by each of the top 5 profiles in the past 3 days<jupyter_code>def postsCountForNDays(handle, noOfDaysConsidered):
openProfile(handle)
driver.refresh()
time.sleep(1)
WebDriverWait(driver,30).until(EC.element_to_be_clickable((By.CLASS_NAME, 'kIKUG'))).click()
count = 0
while True:
time.sleep(1)
WebDriverWait(driver, 30).until(EC.presence_of_element_located((By.XPATH, '//time[contains(@class, "Nzb55")]')))
timeData = driver.find_element_by_xpath('//time[contains(@class, "Nzb55")]').get_attribute('innerHTML')
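# relative timestamps end in "s"/"m"/"h" (always within a day) or in "d" for days; anything older stops the scan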
if timeData[-1] in ["h","s","m"]:
count+=1
elif timeData[-1] == "d" and int(timeData[:-1]) <= noOfDaysConsidered:
count+=1
else:
break
try:
nextPost = driver.find_element_by_class_name('coreSpriteRightPaginationArrow')
nextPost.click()
except:
break
print(f"{handle} has {count} post(s) in past 3 days")
return handle, count
<jupyter_output><empty_output><jupyter_text>#### In the following cell, I have
1. created an empty list
2. initialed a variable with the number of days to be considered (here 3, as per question)
3. iterated over the top 5 profiles
3.1 called the above function (postsCountForNDays) to retrieve the number of posts by each user
3.2 appended to the list<jupyter_code>foodProfilesAndPostsCount = []
noOfDaysConsidered = 3
for index , row in top5foodProfilesDF.iterrows():
handle, count = postsCountForNDays(row['Profile Name'],noOfDaysConsidered)
foodProfilesAndPostsCount.append([handle, count])<jupyter_output>foodhunter_sabu has 2 post(s) in past 3 days
cochinfoodalert has 0 post(s) in past 3 days
kerala.food.diaries has 1 post(s) in past 3 days
salmanthefoodie has 2 post(s) in past 3 days
theindianfoodblogger has 1 post(s) in past 3 days
<jupyter_text>#### In the following cell, I have
1. stored the above list as a pandas dataframe<jupyter_code>postsIn3DaysDF = pd.DataFrame(foodProfilesAndPostsCount, columns = ['Profile Name','Number of posts in past 3 days'])
postsIn3DaysDF<jupyter_output><empty_output><jupyter_text>### 1.3 Depict this information using a suitable graph.#### In the following cell, I have
1. plotted a bar graph and a pie chart with the above data<jupyter_code>plt.bar(postsIn3DaysDF['Profile Name'],postsIn3DaysDF['Number of posts in past 3 days'],color = 'orange')
plt.xlabel('Profile Name')
plt.ylabel('Number of posts in past 3 days')
plt.xticks(rotation = 90)
plt.grid()
plt.show()
plt.pie(postsIn3DaysDF['Number of posts in past 3 days'],labels=postsIn3DaysDF['Profile Name'],labeldistance= 1.05, radius=1.2,autopct="%.2f",pctdistance=.8)
plt.style.use("ggplot")
plt.title("Number of posts in 3 days and their respective profiles")
plt.show()<jupyter_output><empty_output><jupyter_text>## Task 2 : Your friend also needs a list of hashtags that he should use in his posts.
2.1 Open the 5 handles you obtained in the last question, and scrape the content of the first 10 posts of each handle.
2.2 Prepare a list of all words used in all the scraped posts and calculate the frequency of each word.
2.3 Create a csv file with two columns : the word and its frequency
2.4 Now, find the hashtags that were most popular among these bloggers
2.5 Plot a Pie Chart of the top 5 hashtags obtained and the number of times they were used by these bloggers in the scraped posts.### 2.1 Open the 5 handles you obtained in the last question, and scrape the content of the first 10 posts of each handle.#### In the following cell, I have
1. defined a function to get the captions of a profile.
    1.1 the number of posts to consider for retrieving the captions is given as an argument<jupyter_code>def postScraper(handle, noOfPostsConsidered):
openProfile(handle)
driver.refresh()
time.sleep(1)
postNum = 1
WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.CLASS_NAME, 'kIKUG'))).click()
funcpostsAndCaptionList = []
while postNum <= noOfPostsConsidered:
time.sleep(1)
WebDriverWait(driver, 30).until(EC.presence_of_element_located((By.XPATH, '//div[contains(@class, "C4VMK")]/span')))
captionData = driver.find_element_by_xpath('//div[contains(@class, "C4VMK")]/span').text
        print(f"{handle}'s Post {postNum} scraped. ")
funcpostsAndCaptionList.append([handle, postNum, captionData])
try:
nextPost = driver.find_element_by_class_name('coreSpriteRightPaginationArrow')
nextPost.click()
postNum+=1
except:
break
return funcpostsAndCaptionList
    <jupyter_output><empty_output><jupyter_text>#### In the two following cells, I have
1. given the value for the number of posts to be considered, n
2. created an empty list 1
3. iterated over the top 5 profiles
    3.1 stored the n captions for that profile in a list 2
    3.2 iterated over the elements of list 2 one by one and appended them to list 1
4. went back to the home page of Instagram
5. stored list 1 in a pandas dataframe<jupyter_code>noOfPostsConsidered = 10
postsAndCaptionsList = []
for index , row in top5foodProfilesDF.iterrows():
funcpostsAndCaptionList = postScraper(row['Profile Name'], noOfPostsConsidered)
for i in funcpostsAndCaptionList:
postsAndCaptionsList.append(i)
print("***********************************************************")
goHomeInsta()
postCaptionsDF = pd.DataFrame(postsAndCaptionsList, columns = ['Profile Name','Post Number', 'Post Caption'])
postCaptionsDF.head(50)<jupyter_output><empty_output><jupyter_text>### 2.2 Prepare a list of all words used in all the scraped posts and calculate the frequency of each word.#### In the following cell, I have
1. defined a function to check if a word is valid or not <jupyter_code>def isAllowedSpecificWord(word):
charRe = re.compile(r'[^a-zA-Z0-9.]')
word = charRe.search(word)
return not bool(word)<jupyter_output><empty_output><jupyter_text>#### In the three following cells, I have
1. iterated over every row of captions and then iterated over every valid word in each row
    1.1 if the word is valid, it is added to the dictionary and its count is updated
2. stored the above dictionary in a pandas dataframe
3. sorted the dataframe in descending order of frequency.<jupyter_code>wordFreq = dict()
for index , row in postCaptionsDF.iterrows():
for word in row['Post Caption'].split():
if isAllowedSpecificWord(word):
if word in wordFreq:
wordFreq[word] += 1
else:
wordFreq[word] = 1
#print(wordFreq)
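# An equivalent, more idiomatic alternative (optional, equivalent to the loop above):
# collections.Counter builds the same word -> frequency mapping in one pass.
from collections import Counter

wordFreqAlt = Counter(
    word
    for _, row in postCaptionsDF.iterrows()
    for word in row['Post Caption'].split()
    if isAllowedSpecificWord(word)
)
# wordFreqAlt.most_common(10) lists the 10 most frequent valid words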
wordFreqDF = pd.DataFrame([{"Word":k, "Frequency":v} for k, v in wordFreq.items()])
wordFreqDF
wordFreqDF = wordFreqDF.sort_values(by=['Frequency'],ascending=False,ignore_index=True)
wordFreqDF.head(20)<jupyter_output><empty_output><jupyter_text>### 2.3 Create a csv file with two columns : the word and its frequency#### In the following cell, I have
1. stored the above data in dataframe to a csv file<jupyter_code>wordFreqDF.to_csv('Word Frequency in food blogs.csv',index=False)
wordFreqDF.head(5)<jupyter_output><empty_output><jupyter_text>### 2.4 Now, find the hashtags that were most popular among these bloggers#### In the three following cells, I have
1. created an empty dictionary
2. iterated over every row of postCaptions and then iterated over every valid word in each row
    2.1 if the word begins with a '#', it is added to the dictionary and its count is updated
3. stored the above dictionary in a pandas dataframe
4. sorted the dataframe in descending order of frequency.<jupyter_code>hashTagFreq = dict()
for index , row in postCaptionsDF.iterrows():
for word in row['Post Caption'].split():
if word[0] == '#':
if word in hashTagFreq:
hashTagFreq[word] += 1
else:
hashTagFreq[word] = 1
#print(hashTagFreq)
hashTagFreqDF = pd.DataFrame([{"#Tag":k, "Frequency":v} for k, v in hashTagFreq.items()])
hashTagFreqDF
hashTagFreqDF = hashTagFreqDF.sort_values(by=['Frequency'],ascending=False,ignore_index=True)
hashTagFreqDF.head(10)<jupyter_output><empty_output><jupyter_text>### 2.5 Plot a Pie Chart of the top 5 hashtags obtained and the number of times they were used by these bloggers in the scraped posts.#### In the following two cells, I have
1. updated the dataframe with only the top 5 most frequent hashtags
2. plotted a pie chart for the above data<jupyter_code>hashTagFreqDF = hashTagFreqDF[0:5]
hashTagFreqDF.head()
plt.pie(hashTagFreqDF['Frequency'],labels = hashTagFreqDF['#Tag'],autopct="%.2f%%")
plt.title('Top 5 Hashtags')
plt.style.use("ggplot")
plt.show()<jupyter_output><empty_output><jupyter_text>## Task 3 : You need to also calculate average followers : likes ratio for the obtained handles.
Followers : Likes ratio is calculated as follows:
3.1 Find out the likes of the top 10 posts of the 5 handles obtained earlier.
3.2 Calculate the average likes for a handle.
3.3 Divide the average likes obtained by the number of followers of the handle to get the average followers:like ratio of each handle.
3.4 Create a bar graph to depict the above obtained information.### 3.1 Find out the likes of the top 10 posts of the 5 handles obtained earlier.#### In the following cell, I have
1. defined a function to scrape likes of n posts of a profile
*PS: a post is skipped if it is an **IGTV** video*<jupyter_code>def likeScraper(handle, noOfPostsConsidered):
openProfile(handle)
driver.refresh()
time.sleep(1)
postNum = 0
WebDriverWait(driver,30).until(EC.element_to_be_clickable((By.CLASS_NAME, 'kIKUG'))).click()
postLikes = []
while True:
driver.implicitly_wait(5)
try:
likeData=driver.find_element_by_xpath('//a[contains(@class,"zV_Nj")]/span')
if(len(likeData.text)>=1):
postNum += 1
likes = likeData.text.replace(',','')
print(f'{handle}\'s post {postNum} has {likes} likes. ')
likes = int(likes)
postLikes.append([handle, postNum, likes])
if postNum == noOfPostsConsidered:
break
except:
if postNum == noOfPostsConsidered:
break
else:
try:
next1 = driver.find_element_by_class_name('coreSpriteRightPaginationArrow')
next1.click()
except:
break
if postNum == noOfPostsConsidered:
break
try:
next1 = driver.find_element_by_class_name('coreSpriteRightPaginationArrow')
next1.click()
except:
break
    return postLikes <jupyter_output><empty_output><jupyter_text>#### In the following two cells, I have
1. stored the value for the number of posts to be considered
2. created an empty list 1
3. iterated over the top 5 profiles
    3.1 called the function (likeScraper) and stored the result in a list 2
    3.2 iterated over list 2 and appended each element to list 1
4. went back to the Instagram home page
5. stored this list 1 into a pandas dataframe<jupyter_code>noOfPostsConsidered = 10
postsAndLikesList = []
for index , row in top5foodProfilesDF.iterrows():
funcpostsAndLikesList = likeScraper(row['Profile Name'], noOfPostsConsidered)
for i in funcpostsAndLikesList:
postsAndLikesList.append(i)
print("***********************************************************")
goHomeInsta()
postsAndLikesDF = pd.DataFrame(postsAndLikesList, columns=['Profile Name','Post No.','Likes'])
postsAndLikesDF<jupyter_output><empty_output><jupyter_text>### 3.2 Calculate the average likes for a handle.#### In the following two cells, I have
1. grouped the dataframe by profile name and found mean
2. stored above result into a new dataframe<jupyter_code>dfAvgLikes = postsAndLikesDF.groupby(['Profile Name']).Likes.agg('mean').to_frame('Avg Likes').reset_index()
dfAvgLikes<jupyter_output><empty_output><jupyter_text>### 3.3 Divide the average likes obtained by the number of followers of the handle to get the average followers:like ratio of each handle.#### In the following cell, I have
1. merged 2 dataframes (top5foodProfilesDF, dfAvgLikes) on profile name
2. stored it into a new pandas dataframe df1
3. created a new column with value ( avg likes / number of followers)<jupyter_code>df1 = pd.merge(top5foodProfilesDF, dfAvgLikes, on='Profile Name')
df1['Avg followers:like ratio'] = df1['Avg Likes'] / df1['No of Followers']
df1<jupyter_output><empty_output><jupyter_text>### 3.4 Create a bar graph to depict the above obtained information.#### In the following two cells, I have
1. plotted two bar graphs (average likes, and average followers:like ratio) from the above dataframe df1<jupyter_code>plt.bar(df1['Profile Name'],df1['Avg Likes'],color = '#fcab08')
plt.xticks(rotation = 90)
plt.xlabel('Profile Name')
plt.ylabel('Avg Likes')
plt.title('Profile vs Avg Likes')
plt.show()
plt.bar(df1['Profile Name'],df1['Avg followers:like ratio'],color = '#36ff6b')
plt.xticks(rotation = 90)
plt.xlabel('Profile Name')
plt.ylabel('Avg followers:like ratio')
plt.title('Profile vs Avg followers:like ratio')
plt.show()<jupyter_output><empty_output>
| permissive | /InstaBot - 2.ipynb | nikhster/InstaBot | 27 |
<jupyter_start><jupyter_text>## Titanic Dataset<jupyter_code>train=pd.read_csv('train.csv')
print train.shape
train.head()<jupyter_output>(891, 12)
<jupyter_text>## Preparing the Data<jupyter_code>train.apply(lambda x: x.isnull().sum())
train.dtypes
# lets encode Sex: male -> 1, female -> 0
train['Sex']=(train['Sex']=='male')*1
train.head()<jupyter_output><empty_output><jupyter_text>- *The column 'Cabin' contains 687 NaN values (out of 891 rows), so we drop it for the linear models; it could be kept for nonlinear models such as random forests.*
- Notice that the Age column has 177 missing values. We have two options: replace the missing values with the mean, or generate them from a normal distribution (age roughly follows a normal distribution).<jupyter_code>#lets delete cabin
del train['Cabin']
# lets find out the reason behind the missing age data
maskMiss=train.Age.isnull()
print np.mean(train.Sex[maskMiss])
print np.mean(train.Embarked[maskMiss]=='Q')
print np.mean(train.Embarked[maskMiss]=='C')
print np.mean(train.Embarked[maskMiss]=='S')
# lets fill the missing age data with 0.7*meanMale + 0.3*meanFemale (about 70% of the passengers with missing age are male)
meanMale=np.mean(train.Age[train.Sex==1])
meanFemale=np.mean(train.Age[train.Sex==0])
print 'Male mean', meanMale
print 'Female mean',meanFemale
mean=0.7*meanMale+0.3*meanFemale
print 'mean',mean
#lets replace age miss
train.Age.fillna(mean,inplace=True)
# lets impute the 2 embarked miss values
train=train.dropna(subset=['Embarked'], how='all')
train.head()<jupyter_output><empty_output><jupyter_text># EDA<jupyter_code># Females survived more than men
sns.set_context(rc={"figure.figsize": (9, 5)})
sns.barplot(x="Sex", y="Survived", hue="Pclass", data=train);
# Survival rate by port of embarkation and passenger class
sns.set_context(rc={"figure.figsize": (10, 7)})
sns.barplot(x="Embarked", y="Survived", hue="Pclass", data=train);
sns.countplot(y="Embarked", hue="Pclass", data=train, palette="Greens_d");
sns.pointplot(x="Sex", y="Survived", hue="Pclass", data=train);<jupyter_output><empty_output><jupyter_text># Apply Machine Learning for prediction<jupyter_code>from matplotlib.colors import ListedColormap
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
cm = plt.cm.RdBu
cm_bright = ListedColormap(['#FF0000', '#0000FF'])
def points_plot(ax, Xtr, Xte, ytr, yte, clf, mesh=True, colorscale=cmap_light, cdiscrete=cmap_bold, alpha=0.1, psize=10, zfunc=False, predicted=False):
h = .02
X=np.concatenate((Xtr, Xte))
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
xx, yy = np.meshgrid(np.linspace(x_min, x_max, 100),
np.linspace(y_min, y_max, 100))
#plt.figure(figsize=(10,6))
if zfunc:
p0 = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 0]
p1 = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]
Z=zfunc(p0, p1)
else:
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
ZZ = Z.reshape(xx.shape)
if mesh:
plt.pcolormesh(xx, yy, ZZ, cmap=cmap_light, alpha=alpha, axes=ax)
if predicted:
showtr = clf.predict(Xtr)
showte = clf.predict(Xte)
else:
showtr = ytr
showte = yte
ax.scatter(Xtr[:, 0], Xtr[:, 1], c=showtr-1, cmap=cmap_bold, s=psize, alpha=alpha,edgecolor="k")
# and testing points
ax.scatter(Xte[:, 0], Xte[:, 1], c=showte-1, cmap=cmap_bold, alpha=alpha, marker="s", s=psize+10)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
return ax,xx,yy
def points_plot_prob(ax, Xtr, Xte, ytr, yte, clf, colorscale=cmap_light, cdiscrete=cmap_bold, ccolor=cm, psize=10, alpha=0.1):
ax,xx,yy = points_plot(ax, Xtr, Xte, ytr, yte, clf, mesh=False, colorscale=colorscale, cdiscrete=cdiscrete, psize=psize, alpha=alpha, predicted=True)
Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=ccolor, alpha=.2, axes=ax)
cs2 = plt.contour(xx, yy, Z, cmap=ccolor, alpha=.6, axes=ax)
plt.clabel(cs2, fmt = '%2.1f', colors = 'k', fontsize=14, axes=ax)
return ax
from sklearn.grid_search import GridSearchCV
from sklearn.cross_validation import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn.linear_model import LogisticRegression
def cv_optimize(clf, parameters, X, y, n_jobs=1, n_folds=5, score_func=None):
if score_func:
gs = GridSearchCV(clf, param_grid=parameters, cv=n_folds, n_jobs=n_jobs, scoring=score_func)
else:
gs = GridSearchCV(clf, param_grid=parameters, n_jobs=n_jobs, cv=n_folds)
gs.fit(X, y)
print "BEST", gs.best_params_, gs.best_score_, gs.grid_scores_
best = gs.best_estimator_
return best
def do_classify(clf, parameters, indf, featurenames, targetname, target1val, mask=None, reuse_split=None, score_func=None, n_folds=5, n_jobs=1,train_size=0.7):
subdf=indf[featurenames]
X=subdf.values
y=(indf[targetname].values==target1val)*1
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, train_size=train_size)
    if mask is not None:
print "using mask"
Xtrain, Xtest, ytrain, ytest = X[mask], X[~mask], y[mask], y[~mask]
    if reuse_split is not None:
print "using reuse split"
Xtrain, Xtest, ytrain, ytest = reuse_split['Xtrain'], reuse_split['Xtest'], reuse_split['ytrain'], reuse_split['ytest']
if parameters:
clf = cv_optimize(clf, parameters, Xtrain, ytrain, n_jobs=n_jobs, n_folds=n_folds, score_func=score_func)
clf=clf.fit(Xtrain, ytrain)
training_accuracy = clf.score(Xtrain, ytrain)
test_accuracy = clf.score(Xtest, ytest)
print "############# based on standard predict ################"
print "Accuracy on training data: %0.2f" % (training_accuracy)
print "Accuracy on test data: %0.2f" % (test_accuracy)
print confusion_matrix(ytest, clf.predict(Xtest))
print "########################################################"
return clf, Xtrain, ytrain, Xtest, ytest<jupyter_output><empty_output><jupyter_text>## Linear/Non Linear modelsLinear classifiers:
- Linear Regression
- Logistic Regression
- Linear Discriminant Analysis
- perceptron
- Support Vector Machines
Non Linear classifiers:
- Classification And Regression Trees
- Naive Bayes
- K-Nearest Neighbors
- Bagging and Random Forest
- Boosting and AdaBoost
### 1.Logistic regression<jupyter_code>train.head()
linearTrain=train[['Pclass','Sex','Age','SibSp','Parch','Fare','Embarked','Survived']]
linearTrain=pd.get_dummies(linearTrain)
# a dataframe suitable for linear models such as logistic regression
print linearTrain.shape
linearTrain.head()
#Features names used for the linear models are:
featurenames=['Pclass','Sex','Age','SibSp','Parch','Fare','Embarked_C','Embarked_Q','Embarked_S']
clf_log, Xtrain, ytrain, Xtest, ytest = do_classify(LogisticRegression(penalty='l1'),
{"C": [0.01, 0.1, 1, 10, 100]}, linearTrain,
['Pclass','Sex','Age',
'SibSp','Parch'
,'Fare','Embarked_C','Embarked_Q','Embarked_S']
, 'Survived',1)<jupyter_output>BEST {'C': 10} 0.787781350482 [mean: 0.65595, std: 0.02766, params: {'C': 0.01}, mean: 0.77653, std: 0.03137, params: {'C': 0.1}, mean: 0.78457, std: 0.02701, params: {'C': 1}, mean: 0.78778, std: 0.03199, params: {'C': 10}, mean: 0.78617, std: 0.03194, params: {'C': 100}]
############# based on standard predict ################
Accuracy on training data: 0.80
Accuracy on test data: 0.82
[[155 20]
[ 27 65]]
########################################################
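<jupyter_text>Before discussing feature selection, here is a quick illustrative sketch (not part of the original analysis): because the fit above used an L1 penalty, we can inspect which coefficients were driven to zero, which is one form of automatic feature selection. This assumes clf_log and featurenames from the cells above are still in scope.<jupyter_code>import pandas as pd

# coefficients of the L1-penalized logistic regression fitted above;
# features whose coefficient is exactly 0 were effectively dropped by the penalty
coefs = pd.Series(clf_log.coef_[0], index=featurenames)
coefs<jupyter_output><empty_output>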
<jupyter_text>*Feature selection can be done in two ways*:
- Reduce the number of features (manually selecting features, or using algorithms such as PCA or auto-encoder neural networks)
- Regularization (penalizing less useful features with an L2 norm, or sparsifying and eliminating them with an L1 norm; see the grid {"C": [0.01, 0.1, 1, 10, 100]} used above) <jupyter_code>linearTrain.corr()<jupyter_output><empty_output><jupyter_text>Selecting features that are highly correlated with our target:
- Sex = -0.541585
- Pclass= -0.335549
- Fare= 0.255290
- ...

* Common mistakes:
- Selecting features or creating new representative ones using PCA
- The cross-validation set is transformed (via PCA) together with the training dataset!<jupyter_code># Applying PCA
from sklearn import preprocessing
from sklearn.decomposition import PCA
n_component=2
StandardScaler = preprocessing.StandardScaler()
linearTrainScaledPC = StandardScaler.fit_transform(linearTrain)
pca = PCA(n_components=n_component)
X = pca.fit_transform(linearTrainScaledPC)
print pca.explained_variance_ratio_
print 'Total explained variance=', pca.explained_variance_ratio_.sum()
dic={"pc%i" % (i+1):X[:,i] for i in range(n_component)}
dic.keys()
#lets create a dataframe for the two pricipal components
PCAdf=pd.DataFrame(dic)
PCAdf['Survived']=linearTrain.Survived.values
PCAdf['Pclass']=linearTrain.Pclass.values
PCAdf.head()
# lets try fitting again with these pca's
clf_log, Xtrain, ytrain, Xtest, ytest = do_classify(LogisticRegression(penalty='l2'),
{"C": [0.01, 0.1, 1, 10,100]}, PCAdf,
dic.keys()
, 'Survived',1)
plt.figure()
ax=plt.gca()
points_plot(ax, Xtrain, Xtest, ytrain, ytest, clf_log, alpha=0.2);<jupyter_output><empty_output><jupyter_text>### Standardization AKA Z-score normalization
$$z = \frac{x - \mu}{\sigma}$$
Standardizing the features so that they are centered around 0 with a standard deviation of 1. Some examples of algorithms where feature scaling matters are:
- k-nearest neighbors with a Euclidean distance measure if you want all features to contribute equally
- k-means (see k-nearest neighbors)
- logistic regression, SVMs, perceptrons, neural networks etc. if you are using gradient descent/ascent-based optimization, otherwise some weights will update much faster than others
- linear discriminant analysis, principal component analysis, kernel principal component analysis since you want to find directions of maximizing the variance (under the constraints that those directions/eigenvectors/principal components are orthogonal); you want to have features on the same scale since you’d emphasize variables on “larger measurement scales” more. There are many more cases than I can possibly list here … I always recommend you to think about the algorithm and what it’s doing, and then it typically becomes obvious whether we want to scale your features or not.### Min-Max scaling AKA Normalization
$$X_{norm} = \frac{X - X_{min}}{X_{max}-X_{min}}$$
- In this approach, the data is scaled to a fixed range - usually 0 to 1.
- The cost of having this bounded range - in contrast to standardization - is that we will end up with smaller standard deviations, which can suppress the effect of outliers.http://sebastianraschka.com/Articles/2014_about_feature_scaling.html### 3.Support Vector MachinesNeed to specify:
- Choice of parameter C
- Choice of kernel
kernels:
- Linear kernel/logistic regression when n (the number of features) is large and m (the number of training examples) is small (n ~ 10000, m ~ 10-1000)
- Gaussian kernel when n is small and m is intermediate (nonlinear classification) (n ~ 1-1000, m ~ 10-10000)
- Linear kernel/logistic regression when n is small and m is very large (n ~ 1-1000, m ~ 50000+); create more features<jupyter_code># in our case m=800 and n=10 we use Gaussian kernel
linearTrain.head()
# first lets standardize the data
LineatrainScaled = preprocessing.StandardScaler().fit_transform(linearTrain.drop(['Survived'],axis=1))
LineatrainScaled=pd.DataFrame(LineatrainScaled,columns=['Pclass','Sex','Age','SibSp','Parch','Fare','Embarked_C','Embarked_Q','Embarked_S'])
LineatrainScaled['Survived']=linearTrain.Survived.values
LineatrainScaled.head()
from sklearn.svm import SVC
C_range = np.logspace(-2, 10, 3)
gamma_range = np.logspace(-9, 3, 3)
param_grid = dict(gamma=gamma_range, C=C_range)
print C_range
print gamma_range
clf_SVM, Xtrain, ytrain, Xtest, ytest = do_classify(SVC(),
param_grid, LineatrainScaled,
['Pclass','Sex','Age',
'SibSp','Parch'
,'Fare','Embarked_C','Embarked_Q','Embarked_S']
, 'Survived',1)<jupyter_output>BEST {'C': 10000.0, 'gamma': 0.001} 0.800643086817 [mean: 0.61576, std: 0.00160, params: {'C': 0.01, 'gamma': 1.0000000000000001e-09}, mean: 0.61576, std: 0.00160, params: {'C': 0.01, 'gamma': 0.001}, mean: 0.61576, std: 0.00160, params: {'C': 0.01, 'gamma': 1000.0}, mean: 0.61576, std: 0.00160, params: {'C': 10000.0, 'gamma': 1.0000000000000001e-09}, mean: 0.80064, std: 0.03598, params: {'C': 10000.0, 'gamma': 0.001}, mean: 0.63826, std: 0.03113, params: {'C': 10000.0, 'gamma': 1000.0}, mean: 0.66881, std: 0.02732, params: {'C': 10000000000.0, 'gamma': 1.0000000000000001e-09}, mean: 0.76849, std: 0.02567, params: {'C': 10000000000.0, 'gamma': 0.001}, mean: 0.63344, std: 0.02266, params: {'C': 10000000000.0, 'gamma': 1000.0}]
############# based on standard predict ################
Accuracy on training data: 0.82
Accuracy on test data: 0.85
[[154 12]
[ 28 73]]
########################################################
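<jupyter_text>As a small self-contained illustration of the two scaling schemes discussed earlier (z-score standardization and min-max normalization), here is a hedged sketch on a toy feature with an outlier; it is not part of the original analysis.<jupyter_code>import numpy as np
from sklearn import preprocessing

toy = np.array([[1.0], [2.0], [3.0], [10.0]])                 # toy feature with an outlier

z_scaled = preprocessing.StandardScaler().fit_transform(toy)  # mean 0, standard deviation 1
mm_scaled = preprocessing.MinMaxScaler().fit_transform(toy)   # squeezed into the range [0, 1]

print(z_scaled.ravel())
print(mm_scaled.ravel())<jupyter_output><empty_output>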
| no_license | /Methodological Titanic.ipynb | Wail13/DataScience-Projects | 9 |
<jupyter_start><jupyter_text>This is mainly for the Capstone Project<jupyter_code>import pandas as pd
import numpy as np
message = "Hello Capstone Project Course!"  # use a new name instead of shadowing the pandas alias 'pd'
print(message)<jupyter_output>Hello Capstone Project Course!
| no_license | /Capstone Project.ipynb | sukihswong/Coursera_Capstone | 1 |
<jupyter_start><jupyter_text>## Cross-validation (train/validation splitting) in PyTorch<jupyter_code>import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
batch_size=200
learning_rate=0.01
epochs=10
'''
Load the dataset
'''
train_db = datasets.MNIST('D:/Jupyter/工作准备/pytorch学习/data/MNIST', train=True, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
]))
train_loader = torch.utils.data.DataLoader(
train_db,
batch_size=batch_size, shuffle=True)
test_db = datasets.MNIST('D:/Jupyter/工作准备/pytorch学习/data/MNIST', train=False, transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
]))
test_loader = torch.utils.data.DataLoader(test_db,
batch_size=batch_size, shuffle=True)
print('train:', len(train_db), 'test:', len(test_db))
'''
Split the training set
'''
train_db, val_db = torch.utils.data.random_split(train_db, [50000, 10000])
print('db1:', len(train_db), 'db2:', len(val_db))
train_loader = torch.utils.data.DataLoader(
train_db,
batch_size=batch_size, shuffle=True)
val_loader = torch.utils.data.DataLoader(
val_db,
batch_size=batch_size, shuffle=True)<jupyter_output>db1: 50000 db2: 10000
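<jupyter_text>With the split in place, val_loader is typically used once per epoch to monitor generalization while train_loader drives the parameter updates. The cell below is only a minimal sketch of that pattern; the tiny linear model and the single epoch are placeholder assumptions, not part of the original notebook.<jupyter_code>import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(28 * 28, 10)                        # placeholder model for the sketch
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()

for epoch in range(1):                                # one epoch, just to show the pattern
    model.train()
    for data, target in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(data.view(data.size(0), -1)), target)
        loss.backward()
        optimizer.step()

    model.eval()
    correct = 0
    with torch.no_grad():
        for data, target in val_loader:
            pred = model(data.view(data.size(0), -1)).argmax(dim=1)
            correct += (pred == target).sum().item()
    print('epoch:', epoch, 'validation accuracy:', correct / len(val_db))<jupyter_output><empty_output>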
| no_license | /Pytorch_pratical_manual/Pytorch数据集划分与交叉验证.ipynb | Jiezju/How_To_USE_Pytorch | 1 |
<jupyter_start><jupyter_text># Object Detection Demo
Welcome to the object detection inference walkthrough! This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image. Make sure to follow the [installation instructions](https://github.com/tensorflow/models/blob/master/object_detection/g3doc/installation.md) before you start.# Imports<jupyter_code>import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image<jupyter_output><empty_output><jupyter_text>## Env setup<jupyter_code># This is needed to display the images.
%matplotlib inline
# This is needed since the notebook is stored in the object_detection folder.
sys.path.append("..")<jupyter_output><empty_output><jupyter_text>## Object detection imports
Here are the imports from the object detection module.<jupyter_code>from utils import label_map_util
from utils import visualization_utils as vis_util<jupyter_output><empty_output><jupyter_text># Model preparation ## Variables
Any model exported using the `export_inference_graph.py` tool can be loaded here simply by changing `PATH_TO_CKPT` to point to a new .pb file.
By default we use an "SSD with Mobilenet" model here. See the [detection model zoo](https://github.com/tensorflow/models/blob/master/object_detection/g3doc/detection_model_zoo.md) for a list of other models that can be run out-of-the-box with varying speeds and accuracies.<jupyter_code># What model to download.
MODEL_NAME = 'ssd_mobilenet_v1_coco_11_06_2017'
MODEL_FILE = MODEL_NAME + '.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'
# Path to frozen detection graph. This is the actual model that is used for the object detection.
PATH_TO_CKPT = MODEL_NAME + '/frozen_inference_graph.pb'
# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt')
NUM_CLASSES = 90<jupyter_output><empty_output><jupyter_text>## Download Model<jupyter_code>opener = urllib.request.URLopener()
opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)
tar_file = tarfile.open(MODEL_FILE)
for file in tar_file.getmembers():
file_name = os.path.basename(file.name)
if 'frozen_inference_graph.pb' in file_name:
tar_file.extract(file, os.getcwd())<jupyter_output><empty_output><jupyter_text>## Load a (frozen) Tensorflow model into memory.<jupyter_code>detection_graph = tf.Graph()
with detection_graph.as_default():
od_graph_def = tf.GraphDef()
with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
serialized_graph = fid.read()
od_graph_def.ParseFromString(serialized_graph)
tf.import_graph_def(od_graph_def, name='')<jupyter_output><empty_output><jupyter_text>## Loading label map
Label maps map indices to category names, so that when our convolution network predicts `5`, we know that this corresponds to `airplane`. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine<jupyter_code>label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)<jupyter_output><empty_output><jupyter_text>## Helper code<jupyter_code>def load_image_into_numpy_array(image):
(im_width, im_height) = image.size
return np.array(image.getdata()).reshape(
(im_height, im_width, 3)).astype(np.uint8)<jupyter_output><empty_output><jupyter_text># Detection<jupyter_code># For the sake of simplicity we will use only 2 images:
# image1.jpg
# image2.jpg
# If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = 'test_images'
TEST_IMAGE_PATHS = [ os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i)) for i in range(1, 3) ]
# Size, in inches, of the output images.
IMAGE_SIZE = (12, 8)
with detection_graph.as_default():
with tf.Session(graph=detection_graph) as sess:
# Definite input and output Tensors for detection_graph
image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
# Each box represents a part of the image where a particular object was detected.
detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
# Each score represent how level of confidence for each of the objects.
# Score is shown on the result image, together with the class label.
detection_scores = detection_graph.get_tensor_by_name('detection_scores:0')
detection_classes = detection_graph.get_tensor_by_name('detection_classes:0')
num_detections = detection_graph.get_tensor_by_name('num_detections:0')
for image_path in TEST_IMAGE_PATHS:
image = Image.open(image_path)
# the array based representation of the image will be used later in order to prepare the
# result image with boxes and labels on it.
image_np = load_image_into_numpy_array(image)
# Expand dimensions since the model expects images to have shape: [1, None, None, 3]
image_np_expanded = np.expand_dims(image_np, axis=0)
# Actual detection.
(boxes, scores, classes, num) = sess.run(
[detection_boxes, detection_scores, detection_classes, num_detections],
feed_dict={image_tensor: image_np_expanded})
# Visualization of the results of a detection.
vis_util.visualize_boxes_and_labels_on_image_array(
image_np,
np.squeeze(boxes),
np.squeeze(classes).astype(np.int32),
np.squeeze(scores),
category_index,
use_normalized_coordinates=True,
line_thickness=8)
plt.figure(figsize=IMAGE_SIZE)
plt.imshow(image_np)<jupyter_output><empty_output>
| no_license | /ai_mobile_robot_SRL_telegram/object_detection/object_detection_tutorial.ipynb | machorro/stradigi-python-robot | 9 |
<jupyter_start><jupyter_text># Gradient Checking
Welcome to the final assignment for this week! In this assignment you will learn to implement and use gradient checking.
You are part of a team working to make mobile payments available globally, and are asked to build a deep learning model to detect fraud--whenever someone makes a payment, you want to see if the payment might be fraudulent, such as if the user's account has been taken over by a hacker.
But backpropagation is quite challenging to implement, and sometimes has bugs. Because this is a mission-critical application, your company's CEO wants to be really certain that your implementation of backpropagation is correct. Your CEO says, "Give me a proof that your backpropagation is actually working!" To give this reassurance, you are going to use "gradient checking".
Let's do it!<jupyter_code># Packages
import numpy as np
from testCases import *
from gc_utils import sigmoid, relu, dictionary_to_vector, vector_to_dictionary, gradients_to_vector<jupyter_output><empty_output><jupyter_text>## 1) How does gradient checking work?
Backpropagation computes the gradients $\frac{\partial J}{\partial \theta}$, where $\theta$ denotes the parameters of the model. $J$ is computed using forward propagation and your loss function.
Because forward propagation is relatively easy to implement, you're confident you got that right, and so you're almost 100% sure that you're computing the cost $J$ correctly. Thus, you can use your code for computing $J$ to verify the code for computing $\frac{\partial J}{\partial \theta}$.
Let's look back at the definition of a derivative (or gradient):
$$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$
If you're not familiar with the "$\displaystyle \lim_{\varepsilon \to 0}$" notation, it's just a way of saying "when $\varepsilon$ is really really small."
We know the following:
- $\frac{\partial J}{\partial \theta}$ is what you want to make sure you're computing correctly.
- You can compute $J(\theta + \varepsilon)$ and $J(\theta - \varepsilon)$ (in the case that $\theta$ is a real number), since you're confident your implementation for $J$ is correct.
Lets use equation (1) and a small value for $\varepsilon$ to convince your CEO that your code for computing $\frac{\partial J}{\partial \theta}$ is correct!## 2) 1-dimensional gradient checking
Consider a 1D linear function $J(\theta) = \theta x$. The model contains only a single real-valued parameter $\theta$, and takes $x$ as input.
You will implement code to compute $J(.)$ and its derivative $\frac{\partial J}{\partial \theta}$. You will then use gradient checking to make sure your derivative computation for $J$ is correct.
**Figure 1** : **1D linear model**
The diagram above shows the key computation steps: First start with $x$, then evaluate the function $J(x)$ ("forward propagation"). Then compute the derivative $\frac{\partial J}{\partial \theta}$ ("backward propagation").
**Exercise**: implement "forward propagation" and "backward propagation" for this simple function. I.e., compute both $J(.)$ ("forward propagation") and its derivative with respect to $\theta$ ("backward propagation"), in two separate functions. <jupyter_code># GRADED FUNCTION: forward_propagation
def forward_propagation(x, theta):
"""
Implement the linear forward propagation (compute J) presented in Figure 1 (J(theta) = theta * x)
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
Returns:
J -- the value of function J, computed using the formula J(theta) = theta * x
"""
### START CODE HERE ### (approx. 1 line)
J = np.dot(theta,x)
### END CODE HERE ###
return J
x, theta = 2, 4
J = forward_propagation(x, theta)
print ("J = " + str(J))<jupyter_output>J = 8
<jupyter_text>**Expected Output**:
** J **
8
**Exercise**: Now, implement the backward propagation step (derivative computation) of Figure 1. That is, compute the derivative of $J(\theta) = \theta x$ with respect to $\theta$. To save you from doing the calculus, you should get $dtheta = \frac { \partial J }{ \partial \theta} = x$.<jupyter_code># GRADED FUNCTION: backward_propagation
def backward_propagation(x, theta):
"""
Computes the derivative of J with respect to theta (see Figure 1).
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
Returns:
dtheta -- the gradient of the cost with respect to theta
"""
### START CODE HERE ### (approx. 1 line)
dtheta = x
### END CODE HERE ###
return dtheta
x, theta = 2, 4
dtheta = backward_propagation(x, theta)
print ("dtheta = " + str(dtheta))<jupyter_output>dtheta = 2
<jupyter_text>**Expected Output**:
** dtheta **
2
**Exercise**: To show that the `backward_propagation()` function is correctly computing the gradient $\frac{\partial J}{\partial \theta}$, let's implement gradient checking.
**Instructions**:
- First compute "gradapprox" using the formula above (1) and a small value of $\varepsilon$. Here are the Steps to follow:
1. $\theta^{+} = \theta + \varepsilon$
2. $\theta^{-} = \theta - \varepsilon$
3. $J^{+} = J(\theta^{+})$
4. $J^{-} = J(\theta^{-})$
5. $gradapprox = \frac{J^{+} - J^{-}}{2 \varepsilon}$
- Then compute the gradient using backward propagation, and store the result in a variable "grad"
- Finally, compute the relative difference between "gradapprox" and the "grad" using the following formula:
$$ difference = \frac {\mid\mid grad - gradapprox \mid\mid_2}{\mid\mid grad \mid\mid_2 + \mid\mid gradapprox \mid\mid_2} \tag{2}$$
You will need 3 Steps to compute this formula:
- 1'. compute the numerator using np.linalg.norm(...)
- 2'. compute the denominator. You will need to call np.linalg.norm(...) twice.
- 3'. divide them.
- If this difference is small (say less than $10^{-7}$), you can be quite confident that you have computed your gradient correctly. Otherwise, there may be a mistake in the gradient computation.
<jupyter_code># GRADED FUNCTION: gradient_check
def gradient_check(x, theta, epsilon = 1e-7):
"""
Implement the backward propagation presented in Figure 1.
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
epsilon -- tiny shift to the input to compute approximated gradient with formula(1)
Returns:
difference -- difference (2) between the approximated gradient and the backward propagation gradient
"""
# Compute gradapprox using left side of formula (1). epsilon is small enough, you don't need to worry about the limit.
### START CODE HERE ### (approx. 5 lines)
thetaplus = theta+epsilon # Step 1
thetaminus = theta-epsilon # Step 2
J_plus = forward_propagation(x, thetaplus) # Step 3
J_minus = forward_propagation(x, thetaminus) # Step 4
gradapprox = (J_plus - J_minus) / (2 * epsilon) # Step 5
### END CODE HERE ###
# Check if gradapprox is close enough to the output of backward_propagation()
### START CODE HERE ### (approx. 1 line)
grad = backward_propagation(x,theta)
### END CODE HERE ###
### START CODE HERE ### (approx. 1 line)
numerator = np.linalg.norm(grad-gradapprox) # Step 1'
denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox) # Step 2'
difference = numerator/denominator # Step 3'
### END CODE HERE ###
if difference < 1e-7:
print ("The gradient is correct!")
else:
print ("The gradient is wrong!")
return difference
x, theta = 2, 4
difference = gradient_check(x, theta)
print("difference = " + str(difference))<jupyter_output>The gradient is correct!
difference = 2.91933588329e-10
<jupyter_text>**Expected Output**:
The gradient is correct!
** difference **
2.9193358103083e-10
Congrats, the difference is smaller than the $10^{-7}$ threshold. So you can have high confidence that you've correctly computed the gradient in `backward_propagation()`.
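A quick standalone check (not part of the assignment) of why the centered formula is preferred: for a nonlinear function such as $J(\theta) = \theta^3$ at $\theta = 2$ with $\varepsilon = 10^{-4}$, the centered estimate equals $3\theta^2 + \varepsilon^2$, off from the true derivative $3\theta^2 = 12$ by only about $10^{-8}$, while the one-sided estimate $\frac{J(\theta+\varepsilon) - J(\theta)}{\varepsilon} = 3\theta^2 + 3\theta\varepsilon + \varepsilon^2$ is off by about $6 \times 10^{-4}$. The centered error shrinks like $\varepsilon^2$, the one-sided error only like $\varepsilon$.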
Now, in the more general case, your cost function $J$ has more than a single 1D input. When you are training a neural network, $\theta$ actually consists of multiple matrices $W^{[l]}$ and biases $b^{[l]}$! It is important to know how to do a gradient check with higher-dimensional inputs. Let's do it!## 3) N-dimensional gradient checkingThe following figure describes the forward and backward propagation of your fraud detection model.
**Figure 2** : **deep neural network***LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID*
Let's look at your implementations for forward propagation and backward propagation. <jupyter_code>def forward_propagation_n(X, Y, parameters):
"""
Implements the forward propagation (and computes the cost) presented in Figure 3.
Arguments:
X -- training set for m examples
Y -- labels for m examples
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
W1 -- weight matrix of shape (5, 4)
b1 -- bias vector of shape (5, 1)
W2 -- weight matrix of shape (3, 5)
b2 -- bias vector of shape (3, 1)
W3 -- weight matrix of shape (1, 3)
b3 -- bias vector of shape (1, 1)
Returns:
cost -- the cost function (logistic cost for one example)
"""
# retrieve parameters
m = X.shape[1]
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
W3 = parameters["W3"]
b3 = parameters["b3"]
# LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
Z1 = np.dot(W1, X) + b1
A1 = relu(Z1)
Z2 = np.dot(W2, A1) + b2
A2 = relu(Z2)
Z3 = np.dot(W3, A2) + b3
A3 = sigmoid(Z3)
# Cost
logprobs = np.multiply(-np.log(A3),Y) + np.multiply(-np.log(1 - A3), 1 - Y)
cost = 1./m * np.sum(logprobs)
cache = (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3)
return cost, cache<jupyter_output><empty_output><jupyter_text>Now, run backward propagation.<jupyter_code>def backward_propagation_n(X, Y, cache):
"""
Implement the backward propagation presented in figure 2.
Arguments:
X -- input datapoint, of shape (input size, 1)
Y -- true "label"
cache -- cache output from forward_propagation_n()
Returns:
gradients -- A dictionary with the gradients of the cost with respect to each parameter, activation and pre-activation variables.
"""
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
dW3 = 1./m * np.dot(dZ3, A2.T)
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
dW2 = 1./m * np.dot(dZ2, A1.T)
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1./m * np.dot(dZ1, X.T)
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,
"dA2": dA2, "dZ2": dZ2, "dW2": dW2, "db2": db2,
"dA1": dA1, "dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients<jupyter_output><empty_output><jupyter_text>You obtained some results on the fraud detection test set but you are not 100% sure of your model. Nobody's perfect! Let's implement gradient checking to verify if your gradients are correct.**How does gradient checking work?**.
As in 1) and 2), you want to compare "gradapprox" to the gradient computed by backpropagation. The formula is still:
$$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$
However, $\theta$ is not a scalar anymore. It is a dictionary called "parameters". We implemented a function "`dictionary_to_vector()`" for you. It converts the "parameters" dictionary into a vector called "values", obtained by reshaping all parameters (W1, b1, W2, b2, W3, b3) into vectors and concatenating them.
The inverse function is "`vector_to_dictionary`" which outputs back the "parameters" dictionary.
**Figure 2** : **dictionary_to_vector() and vector_to_dictionary()** You will need these functions in gradient_check_n()
We have also converted the "gradients" dictionary into a vector "grad" using gradients_to_vector(). You don't need to worry about that.
**Exercise**: Implement gradient_check_n().
**Instructions**: Here is pseudo-code that will help you implement the gradient check.
For each i in num_parameters:
- To compute `J_plus[i]`:
1. Set $\theta^{+}$ to `np.copy(parameters_values)`
2. Set $\theta^{+}_i$ to $\theta^{+}_i + \varepsilon$
3. Calculate $J^{+}_i$ using to `forward_propagation_n(x, y, vector_to_dictionary(`$\theta^{+}$ `))`.
- To compute `J_minus[i]`: do the same thing with $\theta^{-}$
- Compute $gradapprox[i] = \frac{J^{+}_i - J^{-}_i}{2 \varepsilon}$
Thus, you get a vector gradapprox, where gradapprox[i] is an approximation of the gradient with respect to `parameter_values[i]`. You can now compare this gradapprox vector to the gradients vector from backpropagation. Just like for the 1D case (Steps 1', 2', 3'), compute:
$$ difference = \frac {\| grad - gradapprox \|_2}{\| grad \|_2 + \| gradapprox \|_2 } \tag{3}$$<jupyter_code># GRADED FUNCTION: gradient_check_n
def gradient_check_n(parameters, gradients, X, Y, epsilon = 1e-7):
"""
Checks if backward_propagation_n computes correctly the gradient of the cost output by forward_propagation_n
Arguments:
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
grad -- output of backward_propagation_n, contains gradients of the cost with respect to the parameters.
x -- input datapoint, of shape (input size, 1)
y -- true "label"
epsilon -- tiny shift to the input to compute approximated gradient with formula(1)
Returns:
difference -- difference (2) between the approximated gradient and the backward propagation gradient
"""
# Set-up variables
parameters_values, _ = dictionary_to_vector(parameters)
grad = gradients_to_vector(gradients)
num_parameters = parameters_values.shape[0]
J_plus = np.zeros((num_parameters, 1))
J_minus = np.zeros((num_parameters, 1))
gradapprox = np.zeros((num_parameters, 1))
# Compute gradapprox
for i in range(num_parameters):
# Compute J_plus[i]. Inputs: "parameters_values, epsilon". Output = "J_plus[i]".
# "_" is used because the function you have to outputs two parameters but we only care about the first one
### START CODE HERE ### (approx. 3 lines)
thetaplus = np.copy(parameters_values) # Step 1
thetaplus[i][0] = thetaplus[i][0] + epsilon # Step 2
J_plus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaplus)) # Step 3
### END CODE HERE ###
# Compute J_minus[i]. Inputs: "parameters_values, epsilon". Output = "J_minus[i]".
### START CODE HERE ### (approx. 3 lines)
thetaminus = np.copy(parameters_values) # Step 1
thetaminus[i][0] = thetaminus[i][0] - epsilon # Step 2
J_minus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaminus)) # Step 3
### END CODE HERE ###
# Compute gradapprox[i]
### START CODE HERE ### (approx. 1 line)
gradapprox[i] = (J_plus[i] - J_minus[i]) / (2 * epsilon)
### END CODE HERE ###
# Compare gradapprox to backward propagation gradients by computing difference.
### START CODE HERE ### (approx. 1 line)
numerator = np.linalg.norm(grad - gradapprox) # Step 1'
denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox) # Step 2'
difference = numerator / denominator # Step 3'
### END CODE HERE ###
if difference > 2e-7:
print ("\033[93m" + "There is a mistake in the backward propagation! difference = " + str(difference) + "\033[0m")
else:
print ("\033[92m" + "Your backward propagation works perfectly fine! difference = " + str(difference) + "\033[0m")
return difference
X, Y, parameters = gradient_check_n_test_case()
cost, cache = forward_propagation_n(X, Y, parameters)
gradients = backward_propagation_n(X, Y, cache)
difference = gradient_check_n(parameters, gradients, X, Y)<jupyter_output>[92mYour backward propagation works perfectly fine! difference = 1.18904178788e-07[0m
| no_license | /Improving Deep Neural Networks/Week1/Gradient Checking/Gradient Checking v1.ipynb | LeonChen66/Deep-Learning-Specialization | 7 |
<jupyter_start><jupyter_text># Problem Set 6 problems## Question 1: Constructing hadron states using SU(3) raising and lowering operators (25 points)### Learning objectives
In this question you will:
- Learn how to find the SU(3) wave functions for baryons using SU(3) raising and lowering operatorsIf we know the SU(3) wave function for one member of an SU(3) multiplet, we can construct the other members of the multiplet using SU(3) raising and lowering operators. Each multiplet has well defined properties under the interchange of two quarks. For example, the spin-3/2 baryon decuplet that contains the $\Delta $ is symmetric under interchange, eg $\Delta ^{+} = \frac{1}{\sqrt{3}} \left (uud+udu+duu \right ) $, while the spin-1/2 octet that contains the proton and neutron is antisymmetric under the interchange of the first two quarks but has no symmetry with respect to the third, eg $ p= \frac{1}{\sqrt{2}}\left (ud-du \right ) u $.### 1a.For the spin-3/2 decuplet, start with the $\Omega^{-} = sss$ baryon with strangeness $S=-3$. Use raising and/or lowering operators to construct the members of the decuplet with $S=-2$, known as the $\Xi$, and with $S=-1 $, known as the $\Sigma^{*}$.Write your answer here### 1b.Starting from the proton, use raising and/or lowering operators to construct the members of the octet with $S=-1$, known as the $\Sigma$.Write your answer here## Question 2: Leptonic decays of vector mesons (25 points)### Learning objectives
In this question you will:
- Learn how neutral mesons couple to the photonNeutral vector mesons can decay into lepton pairs through
$q\overline{q}$ annihilation via a virtual photon. This annihilation requires that the wave function of the $q$ and $\overline q$ overlap ($\psi(r=0)\ne 0$), so annihilation is only possible for states with $\ell=0$. Because the photon has spin 1 and angular momentum must be conserved at the interaction vertices, the sum of the quark spins must have spin 1 (since $\ell=0$). The lowest lying octet vector meson states have exactly this quark configuration.
In general, these decays have small branching ratios, since they occur via the electromagnetic rather than the strong interaction. We can calculate the partial width for electromagnetic decays of vector mesons in terms of the quark content of the mesons and the $q\overline{q}$ wave function overlap $|\psi(0)|^2$. The result is:
$$
\Gamma(V\rightarrow \ell^+ \ell^- ) \propto \frac{\alpha^2 Q^2}{M_V^2}
\left | \psi(0) \right |^2
$$
where $\alpha$ is the fine structure constant, $M_V$ is the mass of the meson and $\psi(0) $ is the wave function at the origin. The $Q^2$ above represents the value of $\left |qe^2\right |^2$ and is discussed below.
Because annihilation into a photon can only occur if the quark and anti-quark in the meson are of the same flavor ($u\overline u $, $d\overline d $ or $s\overline s $ ) we can write the relevant meson wave function $\left | \psi \right > $ in terms of the quarks as:
$$
\left | \psi \right > = \sum_i a_i \left | q_i \overline q_i\right >
$$
With this definition, and noting that our initial $q\overline q$ is annihilating into a photon $ \left < \gamma \right |$, expectation value of $Q^2$ can be written:
\begin{eqnarray*}
Q^2 & = & \left | \left < \gamma \left | {\cal Q} \right | \psi \right > \right |^2 \\
& = & \left | \sum_i a_i \left < \gamma \left | {\cal Q} \right |
q_i \overline q_i\right > \right |^2
\end{eqnarray*}
This is the squared sum of the charges of the quarks in the meson (weighted by $a_i$, the appropriate SU(3) Clebsch-Gordan coefficients).
The charge operator ${\cal Q}$ has the following effect on quark-antiquark pairs:
\begin{eqnarray*}
\left < \gamma \left | {\cal Q} \right | u \overline u \right > & = & \frac{2}{3} \\
\left < \gamma \left | {\cal Q} \right | d \overline d \right > & = & -\frac{1}{3} \\
\left < \gamma \left | {\cal Q} \right | s \overline s \right > & = & - \frac{1}{3} \\
\end{eqnarray*}
Using the quark model assignments for the vector mesons
\begin{eqnarray*}
\rho^{0} & = & \frac{1}{\sqrt{2}} \left ( u\overline u - d\overline d \right )\\
\omega^{0} & = & \frac{1}{\sqrt{2}} \left (u\overline u + d\overline d \right )\\
\phi^{0} & = & s\overline s
\end{eqnarray*}
calculate the ratios of the leptonic decay widths for these 3 mesons.
For this part of the problem, use SU(3) symmetry to make the
approximation that $\psi(0)$ is the same for all 3 states.
Use the Particle Data Book to compare your result to the measured values of these ratios.## Question 3: Hyperfine splitting in the meson multiplets (20 points)### Learning objectives
In this question you will:
- Use concepts first learned in the context of the hydrogen atom to understand mass splittings between vector and pseudoscalar mesonsThe lowest lying pseudoscalar and vector multiplets are both SU(3) nonets (octet plus singlet) with no orbital angular momentum. The mass splittings within a multiplet come from the difference in mass between the $u$, $d$ and $s$ quarks and to a smaller extent from electromagnetic corrections due to the differences in quark charges. Mass splitting between the pseudoscalar and vector multiplets is due to QCD hyperfine splitting. The origin of this term is the same as hyperfine splitting in hydrogen: the interaction between the magnetic moments of the two particles in the bound state.
The hyperfine mass splitting for mesons with quark content $q_1 \overline{q_2}$ has the form:
$$ \Delta M ({\rm meson}) = A \frac{{\bf S}_1 \cdot {\bf S}_2}{m_1 m_2} |\psi(0)|^2$$
where $A$ is a constant, ${\bf S}_i$ is the spin of quark $i$, $m_i$ is the mass of quark $i$ and $|\psi(0)|$ is the wave function at the origin. We cannot calculate $\psi(0)$ from first principles but we can use experimental measurements to determine the combination
$A \left | \psi(0)\right |^2$.
For this problem, use $m_u=m_d=336$ MeV and $m_s=509$ MeV.### 3a.By comparing the masses of the $K^0$ and $K^{0 *}$ mesons, estimate the
value of $A |\psi(0)|^2$.### 3b.Use this value of $A|\psi(0)|^2$ and the difference between the up and strange quark masses to predict the splitting between the $\eta $ and $\omega $. Note: you may assume that the $\eta $ is a pure SU(3) octet state:
$$
\eta = \frac{1}{\sqrt{6}} \left (u\overline u + d\overline d - 2 s\overline s \right )
$$
In the case of the $\omega$, use the wave function given in problem 2.## Question 4: Rutherford Scattering (30 points)### Learning objectives
In this question you will:
- Review the relationship between impact parameter and scattering angle for non-relativistic Rutherford Scattering in Classical Mechanics
- Use the Rutherford Scattering formulae to calculate the cross section into a defined solid angle
### 4a. This problem describes scattering in the non-relativistic limit using Classical Mechanics. Although most of our problems in particle physics need Relativistic Quantum Mechanics, starting with a classical problem provides us with much needed intuition.
Consider a beam of particles of charge $e$ moving in the $+z$-direction. Each particle has kinetic energy $E$. The beam is incident on a target consisting of a single nucleus of charge $Ze$. As shown on slide 6 of the pdf file for lecture 11, the angle of deflection $\theta$ that the particle undergoes is related to its impact parameter $b$ by the expression:
$$
b = \frac{Ze^2}{8\pi\epsilon_0 E} \cot \frac{\theta}{2}
$$
A real Rutherford scattering experiment would have a finite sized target. To keep things simple, for this problem we will focus on a single nucleus. That means the beam size and intensities given below have been adjusted to match this artificial situation. Assume that the beam is centered on the target and has a circular cross sectional area with radius $R$. The beam particles are uniformly distributed within the beam. Take the beam current to be $10^{-12}$ Amps. We will use a gold target ($Z=79$), and a beam with $E=6$ MeV. We'll use an unrealistically small beam size, comparable to the size of a gold atom: $R=10^{-12}$ m. Assume that the beam is turned on for 1 second.
Calculate how many particles are incident on the target and make a histogram with this number of entries that shows the distribution of the scattering angle $\theta$ (in radians). Display the histogram with a logarithmic $y$-axis.
When solving this problem you are free to use the code below, but are not required to do so. The following function returns an array of $N\;\;\left [ x,y \right ]$-positions of the particles within the beam. (The impact parameter is $b=\sqrt{x^2+y^2} $.)<jupyter_code>import numpy as np
import random
import math
import matplotlib.pyplot as plt
def makeData(N,R,seed):
np.random.seed(seed)
xout = []
yout = []
numToGen=int(1.1*(math.pi)*N)
i = 0
xin = np.random.uniform(-1.0,1.0,numToGen)
yin = np.random.uniform(-1.0,1.0,numToGen)
rsq = xin*xin+yin*yin
while len(xout)<N and i<numToGen:
if rsq[i] < 1:
xout.append(R*xin[i])
yout.append(R*yin[i])
i += 1
return xout,yout<jupyter_output><empty_output><jupyter_text>You may use following definitions of physical constants and the parameters:<jupyter_code># Constants
# atomic number for gold
Z = 79
# Charge of the electron in Coulombs
e = 1.602e-19
#Kinetic energy of a beam particle in eV
EineV = 6.0e6
# Translation of the kinetic energy to Joules
E = EineV*e
# E&M epsilon_0
epsilon0 = 8.854e-12
#Write your answer here<jupyter_output><empty_output><jupyter_text>### 4b. A detector that subtends a solid angle $\Delta \theta = \frac{\pi}{8}$ and $\Delta \phi = \frac{\pi}{2}$ centered at $\theta = \frac{\pi}{4}$ and $\phi=\pi$. How many particles hit the detector? Note: you are expected to answer this question using your code from part (a); you are not expected to do an analytic calculation.<jupyter_code>#Write your answer here<jupyter_output><empty_output><jupyter_text>### 4c. Using the expression for cross section, find the cross section
$$
\sigma = \int \frac{d\sigma}{d\Omega} d\Omega
$$
where the integral corresponds to the solid angle of the detector.<jupyter_code>#Write your answer here<jupyter_output><empty_output>
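<jupyter_text>A possible starting point for part (a), shown only as a rough sketch and not as the official solution: compute the number of incident particles from the beam current, sample impact parameters with the makeData helper, invert $b = \frac{Ze^2}{8\pi\epsilon_0 E} \cot \frac{\theta}{2}$ to get $\theta = 2\arctan(k/b)$ with $k = \frac{Ze^2}{8\pi\epsilon_0 E}$, and histogram the result. It reuses Z, e, E, epsilon0 and makeData from the cells above; the seed value is arbitrary.<jupyter_code># Rough sketch only (not the official solution)
I_beam, t_beam = 1e-12, 1.0                     # beam current [A] and duration [s]
N = int(I_beam * t_beam / e)                    # number of incident particles (~6.2e6)
R = 1e-12                                       # beam radius [m]

k = Z * e**2 / (8 * np.pi * epsilon0 * E)       # so that b = k * cot(theta/2)

x, y = makeData(N, R, 2020)
b = np.sqrt(np.array(x)**2 + np.array(y)**2)
theta = 2 * np.arctan(k / b)                    # scattering angle in radians

plt.hist(theta, bins=100)
plt.yscale('log')
plt.xlabel(r'$\theta$ [rad]')
plt.ylabel('particles per bin')
plt.show()<jupyter_output><empty_output>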
| no_license | /Problem Set 6/Problem Set 6 problems.ipynb | mdshapiroLBL/phy129_fall_2020 | 4 |
<jupyter_start><jupyter_text>### Read ResGEN outputs<jupyter_code>import json

def vlgan_json(path):
vlgan_predictions = {}
with open(path) as vlganf:
for line in vlganf:
po = json.loads(line)
vlgan_predictions[po['0']['info']['name']] = po
return vlgan_predictions
vlgan_predictions = vlgan_json('multiwoz/VLGAN/multiwoz_predictions_Nov-21.json')
len(vlgan_predictions)<jupyter_output><empty_output><jupyter_text>### Augment ResGEN json<jupyter_code>hdsa_predictions = json.load(open('multiwoz/HDSA/results.txt.pred.non_delex'))
def augment(vlgan_preds, tag):
for i in vlgan_preds:
hdsa_sample, vlgan_sample = hdsa_predictions[i.replace('.json', '')], vlgan_preds[i]
for t in vlgan_sample:
# HDSA only predicts at agent steps!
_t = int(t)
if _t%2 == 1:
vlgan_preds[i][t]['hdsa'] = hdsa_sample[_t//2]
augment(vlgan_predictions, 'hdsa')
vlgan_predictions = list(vlgan_predictions.values())<jupyter_output><empty_output><jupyter_text>### [HIDDEN] tags<jupyter_code>from utils import Lang
import pickle
lang = pickle.load(open('../d_vocab_lang.pickle', 'rb'))
def trickMe(s):
s = s.replace('<unk>', 'jldifuwlaf') # No such word!
s = ' '.join(lang.decodeSentence(lang.encodeSentence(s)))
s = s.capitalize()
s = s.replace('<unk>', '[HIDDEN]')
return s
for i, example in enumerate(vlgan_predictions):
for t in example:
keys = ['gold', 'resgen', 'hdsa']
for key in keys:
if key in example[t]:
if key == 'resgen':
example[t]['resgen'] = [trickMe(r) for r in example[t]['resgen']]
else:
example[t][key] = trickMe(example[t][key])
vlgan_predictions[i] = example<jupyter_output><empty_output><jupyter_text>### To file<jupyter_code>with open('../outputs/multiwoz_predictions_combined-Nov-21.json', 'w') as outf:
for entry in vlgan_predictions:
outf.write(json.dumps(entry) + '\n')<jupyter_output><empty_output>
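<jupyter_text>A quick optional sanity check (added here; not part of the original pipeline): read the combined file back and confirm the number of JSON lines matches the number of predictions written.<jupyter_code>with open('../outputs/multiwoz_predictions_combined-Nov-21.json') as inf:
    reloaded = [json.loads(line) for line in inf]
len(reloaded), len(vlgan_predictions)<jupyter_output><empty_output>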
<jupyter_start><jupyter_text>1. Read and explore the given dataset. <jupyter_code># Importing Libraries
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
# Ignoring warning
import warnings;warnings.simplefilter('ignore')
# reading the csv data on a Dataframe
amazon = pd.read_csv('ratings_Electronics.csv')
amazon.head()
# Adding col names to the dataframe
amazon.columns = ['userid','itemid','rating','Tstamp']
# making a copy of the dataframe to work with
amazon.head()
df = amazon.copy()
# Dropping the timestamp data as it is not required for recommendation
df = df.drop(labels=['Tstamp'],axis=1)
df.head()
# Knowing the size of the dataframe i.e no of records and columns
df.shape
# Getting the datatype for the columns
# The datatype is in required format
df.info()
# Finding if any null value is present or not
# There are no null values in the dataframe
df.isna().sum()
# Knowing the statistical distribution of the ratings given by users to diff items
# Max rating given is 5
# Min rating given is 1
# Mean is less than median so the distribution is left skewed i.e. more items are given max ratings
df[['rating']].describe().transpose()
# Getting the distribution of items rating-wise
df[['itemid','rating']].groupby('rating').count()
# Plotting the distribution on the bar plot
sns.countplot(df['rating'])
# Getting the unique no of users and the items
print('unique user :',df['userid'].nunique())
print('unique items :',df['itemid'].nunique())<jupyter_output>unique user : 4201696
unique items : 476001
<jupyter_text>2. Take a subset of the dataset to make it less sparse/ denser. <jupyter_code># Getting the count of items for which each user has given ratings
cnt = df.groupby('userid').size().sort_values(ascending=False)
cnt[:10]
# making a dataframe for the users who have rated 50 or more items
df2 = df[df['userid'].isin(cnt[cnt>=50].index)]
df2.head()
# Shape of the reduced dataframe
df2.shape
# distribution of the data w.r.t the rating
sns.countplot(df2['rating'])
# Unique users and items in new dataframe
print('unique user :',df2['userid'].nunique())
print('unique items :',df2['itemid'].nunique())
# converting the dataframe in a matrix form with ratings as the value
mat = df2.pivot(index='userid',columns='itemid',values='rating')
# filling every user-item cell with 0 where the user has not given a rating
mat = mat.fillna(0)
mat.head()
# total no of non-zero ratings in the matrix and the shape of the mat
no_of_rate = np.count_nonzero(mat)
mat.shape
# the records for which items are rated
no_of_rate
#total no of rating position available
no_of_tot_rate = mat.shape[0] * mat.shape[1]
no_of_tot_rate
# finding the density of the rating
dens = no_of_rate / no_of_tot_rate
density = dens * 100
density
# transposing the matrix so that the items are in row and users are in col
# This is required for item based recommendation
mat1 = mat.T
mat1.head()<jupyter_output><empty_output><jupyter_text>3. Split the data randomly into train and test dataset<jupyter_code># Splitting the data in train and test set
from sklearn.model_selection import train_test_split
train,test = train_test_split(df2,test_size=.30,random_state=1)<jupyter_output><empty_output><jupyter_text>4. Building popularity based recommendation system<jupyter_code>train.head(5)
train_data_grouped = train.groupby('itemid').agg({'userid': 'count'}).reset_index()
train_data_grouped.rename(columns = {'userid': 'score'},inplace=True)
train_data_sort = train_data_grouped.sort_values(['score', 'itemid'], ascending = [0,1])
train_data_sort['Rank'] = train_data_sort['score'].rank(ascending=0, method='first')
popularity_recommendations = train_data_sort.head(5)
popularity_recommendations
def recommend(user_id):
user_recommendations = popularity_recommendations
#Add user_id column for which the recommendations are being generated
user_recommendations['userId'] = user_id
#Bring user_id column to the front
cols = user_recommendations.columns.tolist()
cols = cols[-1:] + cols[:-1]
user_recommendations = user_recommendations[cols]
return user_recommendations
users = [15,25,30]
for i in users:
print('The recommendations for user :%d' %(i))
print(recommend(i))
print('\n')<jupyter_output>The recommendations for user :15
userId itemid score Rank
30797 15 B0088CJT4U 155 1.0
19529 15 B003ES5ZUU 124 2.0
8601 15 B000N99BBC 122 3.0
30194 15 B007WTAJTO 112 4.0
30489 15 B00829TIEK 100 5.0
The recommendations for user :25
userId itemid score Rank
30797 25 B0088CJT4U 155 1.0
19529 25 B003ES5ZUU 124 2.0
8601 25 B000N99BBC 122 3.0
30194 25 B007WTAJTO 112 4.0
30489 25 B00829TIEK 100 5.0
The recommendations for user :30
userId itemid score Rank
30797 30 B0088CJT4U 155 1.0
19529 30 B003ES5ZUU 124 2.0
8601 30 B000N99BBC 122 3.0
30194 30 B007WTAJTO 112 4.0
30489 30 B00829TIEK 100 5.0
<jupyter_text>As it is a popularity-based recommendation model, all users get the same recommendations.

5. Build Collaborative Filtering model.<jupyter_code># We already have mat1 in required format
mat1.head(),mat1.shape
# User-User similarity
mat_user_CF = mat1.T
mat_user_CF['userId']=np.arange(0,len(mat_user_CF),1)
mat_user_CF.set_index(['userId'],inplace=True)
mat_user_CF.head()<jupyter_output><empty_output><jupyter_text># Since the data set does not have ratings available for all items from users
# The dataset is highly sparse
# To deal with sparse dataset the best way is SVD or Matrix Factorization approach<jupyter_code># Using scipy library for matrix decomposition
from scipy.sparse.linalg import svds
U,sigma,Vt = svds(mat_user_CF,k=50)
#Converting the sigma into diagonal matrix
sigma = np.diag(sigma)
user_pred_rating = np.dot(np.dot(U,sigma),Vt)
pred_df = pd.DataFrame(user_pred_rating,columns=mat_user_CF.columns)
pred_df.head()
# all ratings with 0 values are replaced with possible ratings
# Recommend the items with highest predicted rating
def recommend_item(UID,mat_user_CF,pred_df,num_recommendation):
user_idx = UID-1 # user start at 1 and not 0
sorted_user_rating = mat_user_CF.iloc[user_idx].sort_values(ascending=False)
sorted_user_pred = pred_df.iloc[user_idx].sort_values(ascending=False)
temp = pd.concat([sorted_user_rating,sorted_user_pred],axis=1)
temp.index.name = 'Recommended item'
temp.columns = ['user_rating','user_pred']
temp = temp.loc[temp.user_rating==0]
temp = temp.sort_values('user_pred',ascending=False)
print('The recommended items for user:{}'.format(UID))
print(temp.head(num_recommendation))
<jupyter_output><empty_output><jupyter_text>7. Get top-K (K = 5) recommendations<jupyter_code># recommendations for the users
UID = [10,20,121]
num_recommendation = 5
for i in UID:
recommend_item(i,mat_user_CF,pred_df,num_recommendation)<jupyter_output>The recommended items for user:10
user_rating user_pred
Recommended item
B007WTAJTO 0.0 1.330201
B001TH7GUU 0.0 1.216939
B0019EHU8G 0.0 0.934725
B000VX6XL6 0.0 0.842173
B000QUUFRW 0.0 0.732741
The recommended items for user:20
user_rating user_pred
Recommended item
B004CLYEDC 0.0 1.228042
B000N99BBC 0.0 1.110763
B008DWCRQW 0.0 1.106711
B0088CJT4U 0.0 1.105205
B002SQK2F2 0.0 0.747075
The recommended items for user:121
user_rating user_pred
Recommended item
B000LRMS66 0.0 0.543927
B002WE4HE2 0.0 0.423175
B000KO0GY6 0.0 0.416801
B001XURP7W 0.0 0.356788
B005HMKKH4 0.0 0.352138
<jupyter_text>6. Model Evaluation<jupyter_code>mat_user_CF.head()
mat_user_CF.mean().head()
pred_df.mean().head()
rmse_df = pd.concat([mat_user_CF.mean(),pred_df.mean()],axis=1)
rmse_df.columns = ['Avg_rating','Avg_pred']
rmse_df.shape
rmse_df['item_idx'] = np.arange(0,len(rmse_df),1)
rmse_df.head()<jupyter_output><empty_output><jupyter_text>6. Evaluate both the models. Based on RMSE<jupyter_code>RMSE = round((((rmse_df.Avg_rating - rmse_df.Avg_pred)**2).mean()**.5),5)
print('RMSE of SVD model is:{}'.format(RMSE))<jupyter_output>RMSE of SVD model is:0.00275
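<jupyter_text>(Optional sketch added for comparison) The RMSE above compares per-item average ratings with per-item average predictions. An element-wise RMSE restricted to the observed (non-zero) ratings is another common way to evaluate the SVD reconstruction:<jupyter_code># element-wise RMSE over the observed ratings only
observed = mat_user_CF.values != 0
elementwise_rmse = np.sqrt(np.mean((mat_user_CF.values[observed] - user_pred_rating[observed])**2))
elementwise_rmse<jupyter_output><empty_output>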
<jupyter_text># CF model using surprise package<jupyter_code># We will use surprise library for CF and will use KNNWithMeans
# we take 10000 records as for memory issue
from surprise import Reader, Dataset,SVD,accuracy,Prediction,KNNWithMeans
reader = Reader(rating_scale=(1,5))
data = Dataset.load_from_df(df2.head(10000)[['userid', 'itemid', 'rating']], reader)
# splitting the data in train and test
from surprise.model_selection import train_test_split
trainset, testset = train_test_split(data, test_size=0.30,random_state=1)
# creating user records
user_records = trainset.ur
#value for user 0 with inner ids of items for which the user has given ratings
user_records[0]
print(trainset.to_raw_iid(100))
knn = KNNWithMeans(51,sim_options={'name':'pearson','user_based':False})
knn.fit(trainset)
# Finding the prediction
pred = knn.test(testset)
# The accuracy of the model in terms of rmse
accuracy.rmse(pred)
# prediction at index 110 in the list of test-set predictions
# was_impossible equal to True means that there is not enough information available for that user/item pair
# This is because of the sparsity of the data matrix, where not all ratings are available in the user-item matrix.
pred[110]
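# (optional check, added for clarity) each surprise Prediction carries a `details` dict;
# counting how many test predictions were flagged impossible gives a feel for the sparsity problem
print('impossible predictions:', sum(p.details.get('was_impossible', False) for p in pred))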
# converting the pred into dataframe
pred_df = pd.DataFrame(pred)
pred_df.head()
# sorting the value based on userid and estimate
pred_df.sort_values(by=['uid','est'],ascending=False,inplace=True)
pred_df.head()
# top 5 prediction based on user
top5pred = pred_df.groupby('uid').head(5)
top5pred.head(20)
# prediction for user w.r.t an item
knn.predict(uid='AZOK5STV85FBJ',iid='B00003006E')
# building new test set for all the records from trainset where no ratings are available
testset_new = trainset.build_anti_testset()
len(testset_new)
testset_new[0:5]
# prediction for new testset with 10000 records only and calculation the rmse
prediction = knn.test(testset_new[0:10000])
accuracy.rmse(prediction)
prediction_df = pd.DataFrame([x.uid,x.iid,x.est] for x in prediction)
prediction_df.columns = ['uid','iid','est']
prediction_df.head()
prediction_df.sort_values(['uid','est'],inplace=True,ascending=False)
prediction_df.head(10)
# to recommend the top 10 items for each user
top_10 = prediction_df.groupby('uid').head(10).reset_index(drop=True)
top_10<jupyter_output><empty_output><jupyter_text>Using SVD approach from Surprise package<jupyter_code># Using the original dataset and filtering for items that have a very large number of ratings
# (the threshold used below keeps only items that appear more than 8000 times in the dataset)
item_count = df['itemid'].value_counts(ascending=False)
item_count
# taking all the items that have more than 8000 ratings
pop_item = item_count.loc[item_count.values > 8000].index
len(pop_item)
# creating the dataframe from the original dataframe with item entries that are there in pop_item
df3 = df.loc[df.itemid.isin(pop_item)]
# the data is reduced to only the records for items with more than 8000 ratings
df3.shape
# loading dataset in surprise
data1 = Dataset.load_from_df(df3[['userid','itemid','rating']],reader)
# Splitting the dataset
from surprise.model_selection import train_test_split
trainset_1,testset_1 = train_test_split(data1,random_state=1,test_size=.30)
svd = SVD(n_factors=50,biased=False)
svd.fit(trainset_1)
pred_svd = svd.test(testset_1)
pred_svd
pred_svd_df = pd.DataFrame([x.uid,x.iid,x.est] for x in pred_svd)
pred_svd_df.columns = ['uid','iid','est']
pred_svd_df.head()
pred_svd_df.sort_values(['uid','est'],ascending=False,inplace=True)
top_10_pred_svd = pred_svd_df.groupby('uid').head(10).reset_index(drop=True)
top_10_pred_svd
# since the data is taken for items that are rated for more than 8000 we get high rmse
# otherwise rmse for svd model as calculated above is very less
accuracy.rmse(pred_svd)
user_fac = svd.pu
item_fac = svd.qi
user_fac.shape,item_fac.shape
# the left (user-factor) matrix is 107941 x 50
# the latent dimension is 50, and the transpose of the right (item-factor) matrix is 50 x 14
# 107941 is the number of users
# 14 is the number of items
# calculating the dot product
predic = np.dot(user_fac,np.transpose(item_fac))
# prediction for user 1
predic[1,0:3]
for i in range(0,3,1):
print(svd.predict(uid=trainset_1.to_raw_uid(1),iid=trainset_1.to_raw_iid(i)))
# The estimation are same for user 1 w.r.t the three item id<jupyter_output><empty_output><jupyter_text>HyperTuning the SVD model<jupyter_code>from surprise.model_selection import GridSearchCV
param = {'n_factors':[50],'reg_all':[0.01,0.03]}
GSCV = GridSearchCV(SVD,param_grid=param,measures=['rmse'],cv=3,refit=True)
GSCV.fit(data1)
GSCV.best_params
item_fac
# finding correlation between the items
item_sim = np.corrcoef(item_fac)
# sorting with the max values
max_val = (-item_sim).argsort()
# converting item-item corr matrix to dataframe
topk = pd.DataFrame(max_val[:,0:10])
# converting inner id to raw id
all_item = [trainset_1.to_raw_iid(x) for x in range(0,14)]
# creating dict of all items having range and raw id as the key value
item_iid_dict = dict(zip(range(0,14),all_item))
topk = topk.replace(item_iid_dict)
topk['item'] = all_item
topk[0:5]
# above shows the similarity between the items
# It can be seen from the output of cell 402 and 403 that the user: AQSGYIUTTW6XZ is recommended items that are similar..
# .. and can be seen in 1st column of item-item similarity matrix<jupyter_output><empty_output>
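<jupyter_text>A small helper (a sketch, assuming the `item_sim`, `all_item` and `trainset_1` objects built above) to list the most similar items for any raw item id:<jupyter_code>def similar_items(raw_iid, k=5):
    # map the raw item id to its inner index and rank the other items by correlation
    inner = trainset_1.to_inner_iid(raw_iid)
    order = (-item_sim[inner]).argsort()
    return [all_item[j] for j in order if j != inner][:k]

similar_items(all_item[0])<jupyter_output><empty_output>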
<jupyter_start><jupyter_text>
Phystech School of Applied Mathematics and Informatics (FPMI), MIPT --- Neuron with a sigmoid ---<jupyter_code>from matplotlib import pyplot as plt
from matplotlib.colors import ListedColormap
import numpy as np
import pandas as pd<jupyter_output><empty_output><jupyter_text>Recall that the sigmoid function looks like this:
$$\sigma(x)=\frac{1}{1+e^{-x}}$$In this case we are again solving a binary classification problem (2 classes: 1 or 0), but this time a different activation function will be used:
$$MSE\_Loss(\hat{y}, y) = \frac{1}{n}\sum_{i=1}^{n} (\hat{y_i} - y_i)^2 = \frac{1}{n}\sum_{i=1}^{n} (\sigma(w \cdot X_i) - y_i)^2$$
Here $w \cdot X_i$ is the dot product, and $\sigma(w \cdot X_i) =\frac{1}{1+e^{-w \cdot X_i}} $ is the sigmoid.
**Note:** The formula assumes that $b$ -- the bias term -- is part of the weight vector: $w_0$. Then, if a column of ones is prepended to $X$, $b$ enters the dot product exactly as the bias term. In the class implementation $b$ is handled separately (to keep things clearer). We can take the derivative of the loss with respect to the weights and descend in weight space in the direction of steepest decrease of the loss. The gradient descent weight-update formula is:$$w^{j+1} = w^{j} - \alpha \frac{\partial Loss}{\partial w} (w^{j})$$where $w^j$ is the weight vector at iteration $j$. Writing it out further:* For the weight $w_j$:$$ \frac{\partial Loss}{\partial w_j} =
\frac{2}{n} \sum_{i=1}^n \left(\sigma(w \cdot x_i) - y_i\right)(\sigma(w \cdot x_i))_{w_j}' = \frac{2}{n} \sum_{i=1}^n \left(\sigma(w \cdot x_i) - y_i\right)\sigma(w \cdot x_i)(1 - \sigma(w \cdot x_i))x_{ij}$$* The gradient of $Loss$ with respect to the weight vector is a vector whose $j$-th component equals $\frac{\partial Loss}{\partial w_j}$ (recall that there are $m$ weights in total):$$\begin{align}
\frac{\partial Loss}{\partial w} &= \begin{bmatrix}
\frac{2}{n} \sum_{i=1}^n \left(\sigma(w \cdot x_i) - y_i\right)\sigma(w \cdot x_i)(1 - \sigma(w \cdot x_i))x_{i1} \\
\frac{2}{n} \sum_{i=1}^n \left(\sigma(w \cdot x_i) - y_i\right)\sigma(w \cdot x_i)(1 - \sigma(w \cdot x_i))x_{i2} \\
\vdots \\
\frac{2}{n} \sum_{i=1}^n \left(\sigma(w \cdot x_i) - y_i\right)\sigma(w \cdot x_i)(1 - \sigma(w \cdot x_i))x_{im}
\end{bmatrix}
\end{align}=\frac{1}{n} X^T (\sigma(w \cdot X) - y)\sigma(w \cdot X)(1 - \sigma(w \cdot X))$$Let's implement the sigmoid and its derivative:<jupyter_code>def sigmoid(x):
"""Сигмоидальная функция"""
return 1.0 / (1.0 + np.exp(-x))
def sigmoid_derivative(x):
"""Производная сигмоиды"""
    return sigmoid(x) * (1.0 - sigmoid(x))<jupyter_output><empty_output><jupyter_text>Now we need to write a neuron with a sigmoid activation function. Everything is very similar to the perceptron, but the weights are updated differently and a different activation function is used:<jupyter_code>def loss(y_pred, y):
'''
    Compute the mean squared error
'''
y_pred = y_pred.reshape(-1,1)
y = np.array(y).reshape(-1,1)
return 0.5 * np.mean((y_pred - y) ** 2)
class Neuron:
    def __init__(self, w=None, b=0):
        """
        :param: w -- weight vector
        :param: b -- bias
        """
        # we do not yet know the size of the matrix X, so we do not know how many weights there will be
        self.w = w
        self.b = b
    def activate(self, x):
        return sigmoid(x)
    def forward_pass(self, X):
        """
        Computes the neuron's answer for a batch of objects
        :param: X -- matrix of examples of shape (n, m), each row is a separate object
        :return: vector of shape (n, 1) with the neuron's answers
        """
        # np.dot gives one weighted sum per object (the original np.sum collapsed everything to a single scalar)
        cell_body_sum = np.dot(X, self.w) + self.b
        return self.activate(cell_body_sum)
    def backward_pass(self, X, y, y_pred, learning_rate=0.1):
        """
        Updates the neuron's weights using this batch
        :param: X -- input matrix of shape (n, m)
                y -- vector of correct answers of shape (n, 1)
                learning_rate -- "learning rate" (the symbol alpha in the formulas above)
        Nothing needs to be returned from this method; only the weights must be
        updated correctly with gradient descent.
        """
        # gradient of the loss with respect to w and b (see the derivation above)
        grad = (y_pred - y) * y_pred * (1 - y_pred) / X.shape[0]
        self.w -= learning_rate * np.dot(X.T, grad)
        self.b -= learning_rate * np.sum(grad)
    def fit(self, X, y, num_epochs=5000):
        """
        Descend towards the minimum
        :param: X -- matrix of objects of shape (n, m)
                y -- vector of correct answers of shape (n, 1)
                num_epochs -- number of training iterations
        :return: loss_values -- list of loss values
        """
        self.w = np.zeros((X.shape[1], 1))  # column of shape (m, 1)
        self.b = 0  # bias
        loss_values = []  # loss values at successive weight updates
        for i in range(num_epochs):
            y_pred = self.forward_pass(X)
            loss_values.append(loss(y_pred, y))
            self.backward_pass(X, y, y_pred)
        return loss_values<jupyter_output><empty_output><jupyter_text>**Testing the neuron**

Let's test the new neuron **on the same data**, by analogy with what was done for the perceptron.

**Checking forward_pass()**<jupyter_code>w = np.array([1., 2.]).reshape(2, 1)
b = 2.
X = np.array([[1., 3.],
[2., 4.],
[-1., -3.2]])
neuron = Neuron(w, b)
y_pred = neuron.forward_pass(X)
print ("y_pred = " + str(y_pred))<jupyter_output>y_pred = 0.9999908339962802
<jupyter_text>**Checking backward_pass()**<jupyter_code>y = np.array([1, 0, 1]).reshape(3, 1)
neuron.backward_pass(X, y, y_pred)
print ("w = " + str(neuron.w))
print ("b = " + str(neuron.b))<jupyter_output><empty_output><jupyter_text>Посмотрим, как меняется функция потерь в течение процесса обучения на реальных данных - датасет "Яблоки и Груши":<jupyter_code>data = pd.read_csv("./data/apples_pears.csv")
data.head()
plt.figure(figsize=(10, 8))
plt.scatter(data.iloc[:, 0], data.iloc[:, 1], c=data['target'], cmap='rainbow')
plt.title('Apples and pears', fontsize=15)
plt.xlabel('symmetry', fontsize=14)
plt.ylabel('yellowness', fontsize=14)
plt.show();<jupyter_output><empty_output><jupyter_text>Let's mark which columns are the features and which is the class label:<jupyter_code>X = data.iloc[:,:2].values  # object-feature matrix
y = data['target'].values.reshape((-1, 1))  # classes (a column of zeros and ones)<jupyter_output><empty_output><jupyter_text>**Plotting the loss function**
The loss should decrease and eventually get close to 0<jupyter_code>%%time
neuron = Neuron()
Loss_values = neuron.fit(X, y)
plt.figure(figsize=(10, 8))
plt.plot(Loss_values)
plt.title('Loss function', fontsize=15)
plt.xlabel('iteration number', fontsize=14)
plt.ylabel('$Loss(\hat{y}, y)$', fontsize=14)
plt.show()<jupyter_output><empty_output><jupyter_text>Let's see how the neuron classified the objects in the sample:<jupyter_code>plt.figure(figsize=(10, 8))
plt.scatter(data.iloc[:, 0], data.iloc[:, 1], c=np.array(neuron.forward_pass(X) > 0.5).ravel(), cmap='spring')
plt.title('Apples and pears', fontsize=15)
plt.xlabel('symmetry', fontsize=14)
plt.ylabel('yellowness', fontsize=14)
plt.show();<jupyter_output><empty_output><jupyter_text>What you actually observe here (the poor quality of the separation) is a consequence of **vanishing gradients**. We will talk more about this later (that is, about why it happens in this case).
Let's try increasing the number of gradient descent iterations (50k iterations):<jupyter_code>%%time
neuron = Neuron()  # was: <your code here>
loss_values = neuron.fit(X, y, num_epochs=50000)  # was: <your code here> # num_epochs=50000
plt.figure(figsize=(10, 8))
plt.scatter(data.iloc[:, 0], data.iloc[:, 1], c=np.array(neuron.forward_pass(X) > 0.5).ravel(), cmap='spring')
plt.title('Apples and pears', fontsize=15)
plt.xlabel('symmetry', fontsize=14)
plt.ylabel('yellowness', fontsize=14)
plt.show();<jupyter_output><empty_output>
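<jupyter_text>A quick illustration (a sketch added for clarity) of why the gradients vanish here: the sigmoid derivative is at most 0.25 and is nearly zero far from the origin, so the factor $\sigma(w \cdot x)(1 - \sigma(w \cdot x))$ in the gradient makes the weight updates tiny once the pre-activations become large.<jupyter_code>z = np.linspace(-10, 10, 5)
print(z)
print(sigmoid_derivative(z))  # maximum possible value is 0.25; near-zero for large |z|<jupyter_output><empty_output>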
<jupyter_start><jupyter_text>### Decision Tree Class<jupyter_code>class DecisionTree:
# constructor
def __init__(self, depth=0, max_depth = 5):
self.left = None
self.right = None
self.fkey = None
self.fval = None
self.max_depth = max_depth
self.depth = depth
self.target = None # o/p var
def train(self, data):
features = ['pclass', 'sex', 'age', 'sibsp', 'parch', 'fare']
info_gain = []
for f in features:
i_g = information_gain(data, f, data[f].mean())
info_gain.append(i_g)
self.fkey = features[np.argmax(info_gain)]
self.fval = data[self.fkey].mean()
print("DT is choosing feature - ", self.fkey)
# split the data
left, right = divide_data(data, self.fkey, self.fval)
# ====================================
# STOPPING CONDITIONS
# ====================================
# CASE 1 - WHEN NODE IS PURE
if left.shape[0] == 0 or right.shape[0] == 0:
if data['survived'].mean() >=0.5:
self.target = 'survived'
else:
self.target = 'dead'
return
# CASE II - WHEN MAX_DEPTH IS REACHED
if (self.depth >= self.max_depth):
if data['survived'].mean() >=0.5:
self.target = 'survived'
else:
self.target = 'dead'
return
# ====================================
# RECURSION CASE
# ====================================
self.left = DecisionTree(depth= self.depth+1, max_depth= self.max_depth)
self.left.train(left)
self.right = DecisionTree(depth= self.depth+1 , max_depth= self.max_depth)
self.right.train(right)
def predict(self, test):
if test[self.fkey]> self.fval:
# go to right
if self.right is None:
                # this is a leaf node
return self.target
return self.right.predict(test)
else:
# go to left
if self.left is None:
return self.target
return self.left.predict(test)
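
# NOTE (added for clarity): `divide_data` and `information_gain` are defined in earlier
# cells of the full notebook and are not shown in this excerpt. A minimal, hypothetical
# entropy-based sketch of what they might look like:
#
# def divide_data(data, fkey, fval):
#     left = data[data[fkey] <= fval]
#     right = data[data[fkey] > fval]
#     return left, right
#
# def entropy(col):
#     p = col.value_counts(normalize=True)
#     return -np.sum(p * np.log2(p + 1e-12))
#
# def information_gain(data, fkey, fval):
#     left, right = divide_data(data, fkey, fval)
#     if len(left) == 0 or len(right) == 0:
#         return -1e6
#     h_parent = entropy(data['survived'])
#     h_children = (len(left)*entropy(left['survived']) + len(right)*entropy(right['survived'])) / len(data)
#     return h_parent - h_children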
split = int(0.7*1009)
train_data = data[:split]
test_data = data[split:]
train_data.shape
test_data.shape
model = DecisionTree(max_depth=3)
model.train(train_data)
print(model.fkey)
print(model.fval)
print(model.left.fkey)
print(model.left.fval)
print(model.right.fkey)
print(model.right.fval)
test_data.iloc[1]
predictions = []
for i in range(test_data.shape[0]):
p = model.predict(test_data.iloc[i])
predictions.append(p)
le = LabelEncoder()
predictions = le.fit_transform(predictions)
predictions
y_test = test_data['survived'].astype('int').values
# Accuracy
np.sum(predictions == y_test)/len(y_test)<jupyter_output><empty_output><jupyter_text>## Sklearn Decision Tree<jupyter_code>from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor
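# NOTE (added for clarity): `input_cols` and `output_cols` come from earlier cells of the full
# notebook that are not shown in this excerpt. Plausible (hypothetical) definitions would be:
# input_cols = ['pclass', 'sex', 'age', 'sibsp', 'parch', 'fare']
# output_cols = ['survived']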
sk_tree = DecisionTreeClassifier(criterion='entropy')
sk_tree.fit(train_data[input_cols], train_data[output_cols])
sk_tree_prediction = sk_tree.predict(test_data[input_cols]).astype('int')<jupyter_output><empty_output><jupyter_text>### Scores<jupyter_code>sk_tree.score(train_data[input_cols], train_data[output_cols])
sk_tree.score(test_data[input_cols], test_data[output_cols])
sk_tree.feature_importances_
input_cols<jupyter_output><empty_output>
<jupyter_start><jupyter_text>Next, let's define a function that draws the detected hand joints (and the connecting skeleton) onto the image.<jupyter_code>def draw_joints(image, joints):
count = 0
for i in joints:
if i==[0,0]:
count+=1
if count>= 3:
return
for i in joints:
cv2.circle(image, (i[0],i[1]), 2, (0,0,255), 1)
cv2.circle(image, (joints[0][0],joints[0][1]), 2, (255,0,255), 1)
for i in hand_pose['skeleton']:
if joints[i[0]-1][0]==0 or joints[i[1]-1][0] == 0:
break
cv2.line(image, (joints[i[0]-1][0],joints[i[0]-1][1]), (joints[i[1]-1][0],joints[i[1]-1][1]), (0,255,0), 1)
from jetcam.usb_camera import USBCamera
from jetcam.csi_camera import CSICamera
from jetcam.utils import bgr8_to_jpeg
camera = USBCamera(width=WIDTH, height=HEIGHT, capture_fps=30, capture_device=1)
#camera = CSICamera(width=WIDTH, height=HEIGHT, capture_fps=30)
camera.running = True
import ipywidgets
from IPython.display import display
image_w = ipywidgets.Image(format='jpeg', width=224, height=224)
display(image_w)
def execute(change):
image = change['new']
data = preprocess(image)
cmap, paf = model_trt(data)
cmap, paf = cmap.detach().cpu(), paf.detach().cpu()
counts, objects, peaks = parse_objects(cmap, paf)
joints = preprocessdata.joints_inference(image, counts, objects, peaks)
draw_joints(image, joints)
#draw_objects(image, counts, objects, peaks)# try this for multiple hand pose prediction
image_w.value = bgr8_to_jpeg(image[:, ::-1, :])
execute({'new': camera.value})
camera.observe(execute, names='value')
camera.unobserve_all()
#camera.running = False<jupyter_output><empty_output>
<jupyter_start><jupyter_text># Standard Data Types
The data stored in memory can be of many types. For example, a person's age is stored as a numeric value and his or her address is stored as alphanumeric characters. Python has various standard data types that are used to define the operations possible on them and the storage method for each of them.
Python has five standard data types −
* Numbers
* String
* List
* Tuple
* Dictionary

## Python Numbers
Number data types store numeric values. Number objects are created when you assign a value to them. For example −
<jupyter_code>var1 = 1
var2 = 10<jupyter_output><empty_output><jupyter_text>You can also delete the reference to a number object by using the del statement. The syntax of the del statement is −
```
del var1[,var2[,var3[....,varN]]]
```<jupyter_code>del var1, var2<jupyter_output><empty_output><jupyter_text>Python supports four different numerical types −
* int (signed integers)
* long (long integers, they can also be represented in octal and hexadecimal)
* float (floating point real values)
* complex (complex numbers)
Here are some examples of numbers −
| int | long | float | complex |
|-----|------|-------|---------|
|10 | 51924361L | 0.0 | 3.14j |
|100 | -0x19323L | 15.20 | 45.j |
|-786 | 0122L | -21.9 | 9.322e-36j |
|080 | 0xDEFABCECBDAECBFBAEl | 32.3+e18 | .876j |
|-0490 | 535633629843L | -90. | -.6545+0J |
|-0x260 | -052318172735L | -32.54e100 | 3e+26J |
| 0x69 | -4721885298529L | 70.2-E12 | 4.53e-7j |
* Python allows you to use a lowercase l with long, but it is recommended that you use only an uppercase L to avoid confusion with the number 1. Python displays long integers with an uppercase L.
* A complex number consists of an ordered pair of real floating-point numbers denoted by x + yj, where x and y are the real numbers and j is the imaginary unit.

# Working with strings
Recall from the previous section that strings can be entered with single, double or triple quotes:
```python
'All', "of", '''these''', """are
valid strings"""
```
**Unicode:** Python supports unicode strings - however for the most part this will be ignored here. If you are working in an editor that supports unicode you can use non-ASCII characters in strings (or even for variable names). Alternatively typing something like `"\u00B3"` will give you the string "³" (superscript-3).
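For example (a short illustration added here for clarity):
```python
s = "\u00B3"   # unicode escape for the superscript-three character
print(s)       # prints: ³
café = 3       # non-ASCII identifiers are also allowed in Python 3
print(café)
```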
## The Print Statement

As seen previously, the `print()` function prints all of its arguments as strings, separated by spaces and followed by a linebreak:
- print("Hello World")
- print("Hello",'World')
- print("Hello", )
Note that `print` is different in old versions of Python (2.7) where it was a statement and did not need parentheses around its arguments.<jupyter_code>print("Hello","World")<jupyter_output>Hello World
<jupyter_text>The print function has some optional arguments to control where and how to print. These include `sep` (the separator, default space), `end` (the end character) and `file` (to write to a file). When writing to a file, setting the argument `flush=True` may be useful to force the function to write the output immediately. Without this, Python may buffer the output, which helps to improve the speed for repeated calls to print(), but isn't helpful if you want to see the output immediately (for example, during debugging).<jupyter_code>print("Hello","World",sep='...',end='!!',flush=True)<jupyter_output>Hello...World!!<jupyter_text>## String Formatting
There are lots of methods for formatting and manipulating strings built into Python. Some of these are illustrated here.
String concatenation is the "addition" of two strings. Observe that while concatenating there will be no space between the strings.<jupyter_code>string1='World'
string2='!'
print('Hello' + " " + string1 + string2)<jupyter_output>Hello World!
<jupyter_text>The `%` operator is used to format a string inserting the value that comes after. It relies on the string containing a format specifier that identifies where to insert the value. The most common types of format specifiers are:
- %s -> string
- %d -> Integer
- %f -> Float
- %o -> Octal
- %x -> Hexadecimal
- %e -> exponential
These will be very familiar to anyone who has ever written a C or Java program and follow nearly exactly the same rules as the [`printf()`](https://en.wikipedia.org/wiki/Printf_format_string) function.<jupyter_code>print("Hello %s" % string1)
print("Actual Number = %d" %18)
print("Float of the number = %f" %18)
print("Octal equivalent of the number = %o" %18)
print("Hexadecimal equivalent of the number = %x" %18)
print("Exponential equivalent of the number = %e" %18)<jupyter_output>Hello World
Actual Number = 18
Float of the number = 18.000000
Octal equivalent of the number = 22
Hexadecimal equivalent of the number = 12
Exponential equivalent of the number = 1.800000e+01
<jupyter_text>When referring to multiple variables, parentheses are used. Values are inserted in the order they appear in the parentheses (more on tuples in the next section).<jupyter_code>print("Hello %s %s. This meaning of life is %d" %(string1,string2,42))<jupyter_output><empty_output><jupyter_text>We can also specify the width of the field and the number of decimal places to be used. For example:<jupyter_code>print('Print width 10: |%10s|'%'x')
print('Print width 10: |%-10s|'%'x') # left justified
print("The number pi = %.2f to 2 decimal places"%3.1415)
print("More space pi = %10.2f"%3.1415)
print("Pad pi with 0 = %010.2f"%3.1415) # pad with zeros<jupyter_output>Print width 10: | x|
Print width 10: |x |
The number pi = 3.14 to 2 decimal places
More space pi = 3.14
Pad pi with 0 = 0000003.14
<jupyter_text>## Other String Methods
Multiplying a string by an integer simply repeats it<jupyter_code>print("Hello World! "*5)<jupyter_output>Hello World! Hello World! Hello World! Hello World! Hello World!
<jupyter_text>#### Formatting
Strings can be transformed by a variety of functions that are all methods on a string. That is, they are called by putting the function name with a `.` after the string. They include:
* Upper vs lower case: `upper()`, `lower()`, `capitalize()`, `title()` and `swapcase()` with mostly the obvious meaning. Note that `capitalize` makes only the first letter of the string a capital, while `title` selects upper case for the first letter of every word.
* Padding strings: `center(n)`, `ljust(n)` and `rjust(n)` each place the string into a longer string of length n padded by spaces (centered, left-justified or right-justified respectively). `zfill(n)` works similarly but pads with leading zeros.
* Stripping strings: Often we want to remove spaces; this is achieved with the functions `strip()`, `lstrip()`, and `rstrip()`, which remove spaces from both ends, just the left, or just the right respectively. An optional argument can be used to list a set of other characters to be removed.<jupyter_code>s="heLLo wORLd!"
print(s.capitalize(),"vs",s.title())
print("upper: '%s'"%s.upper(),"lower: '%s'"%s.lower(),"and swapped: '%s'"%s.swapcase())
print('|%s|' % "Hello World".center(30)) # center in 30 characters
print('|%s|'% " lots of space ".strip()) # remove leading and trailing whitespace
print('%s without leading/trailing d,h,L or ! = |%s|',s.strip("dhL!"))
print("Hello World".replace("World","Class"))<jupyter_output>Hello world! vs Hello World!
upper: 'HELLO WORLD!' lower: 'hello world!' and swapped: 'HEllO WorlD!'
| Hello World |
|lots of space|
%s without leading/trailing d,h,L or ! = |%s| eLLo wOR
Hello Class
<jupyter_text>#### Inspecting Strings
There are also lots of ways to inspect or check strings. Examples of a few of these are given here:
* Checking the start or end of a string: `startswith("string")` and `endswith("string")` checks if it starts/ends with the string given as argument
* Capitalisation: There are boolean counterparts for all forms of capitalisation, such as `isupper()`, `islower()` and `istitle()`
* Character type: does the string only contain the characters
* 0-9: `isdecimal()`. Note there is also `isnumeric()` and `isdigit()` which are effectively the same function except for certain unicode characters
* a-zA-Z: `isalpha()` or combined with digits: `isalnum()`
  * non-control code: `isprintable()` accepts anything except '\n' and other ASCII control codes
* \t\n \r (white space characters): `isspace()`
* Suitable as variable name: `isidentifier()`
* Find elements of string: `s.count(w)` finds the number of times w occurs in s, while `s.find(w)` and `s.rfind(w)` find the first and last position of the string w in s.
<jupyter_code>s="Hello World"
print("The length of '%s' is"%s,len(s),"characters") # len() gives length
s.startswith("Hello") and s.endswith("World") # check start/end
# count strings
print("There are %d 'l's but only %d World in %s" % (s.count('l'),s.count('World'),s))
print('"el" is at index',s.find('el'),"in",s) #index from 0 or -1<jupyter_output>The length of 'Hello World' is 11 characters
There are 3 'l's but only 1 World in Hello World
"el" is at index 1 in Hello World
<jupyter_text>## String comparison operations
Strings can be compared in lexicographical order with the usual comparisons. In addition the `in` operator checks for substrings:<jupyter_code>'abc' < 'bbc' <= 'bbc'
"ABC" in "This is the ABC of Python"<jupyter_output><empty_output><jupyter_text>## Accessing parts of stringsStrings can be indexed with square brackets. Indexing starts from zero in Python. And the `len()` function provides the length of a string<jupyter_code>s = '123456789'
print("The string '%s' string is %d characters long" % (s, len(s)) )
print('First character of',s,'is',s[0])
print('Last character of',s,'is',s[len(s)-1])<jupyter_output>The string '123456789' string is 9 characters long
First character of 123456789 is 1
Last character of 123456789 is 9
<jupyter_text>Negative indices can be used to start counting from the back<jupyter_code>print('First character of',s,'is',s[-len(s)])
print('Last character of',s,'is',s[-1])<jupyter_output>First character of 123456789 is 1
Last character of 123456789 is 9
<jupyter_text>Finally, a substring (range of characters) can be specified using $a:b$ to select the characters at index $a,a+1,\ldots,b-1$. Note that the last character is *not* included.<jupyter_code>print("First three characters",s[0:3])
print("Next three characters",s[3:6])<jupyter_output>First three characters 123
Next three characters 456
<jupyter_text>An empty beginning and end of the range denotes the beginning/end of the string:<jupyter_code>print("First three characters", s[:3])
print("Last three characters", s[-3:])<jupyter_output>First three characters 123
Last three characters 789
<jupyter_text>#### Breaking apart strings
When processing text, the ability to split strings apart is particularly useful.
* `partition(separator)`: breaks a string into three parts based on a separator
* `split()`: breaks string into words separated by white-space (optionally takes a separator as argument)
* `join()`: joins the result of a split using string as separator<jupyter_code>s = "one -> two -> three"
print( s.partition("->") )
print( s.split() )
print( s.split(" -> ") )
print( ";".join( s.split(" -> ") ) )<jupyter_output>('one ', '->', ' two -> three')
['one', '->', 'two', '->', 'three']
['one', 'two ', ' three']
one;two ; three
<jupyter_text>## Strings are immutable
It is important that strings are constant, immutable values in Python. While new strings can easily be created it is not possible to modify a string:<jupyter_code>s='012345'
sX=s[:2]+'X'+s[3:] # this creates a new string with 2 replaced by X
print("creating new string",sX,"OK")
sX=s.replace('2','X') # the same thing
print(sX,"still OK")
s[2] = 'X' # an error!!!<jupyter_output>creating new string 01X345 OK
01X345 still OK
<jupyter_text>### Built-in Functions
**find( )** function returns the index value of the given data that is to be found in the string. If it is not found it returns -1. Remember not to confuse the returned -1 with a reverse indexing value.<jupyter_code>print(String0.find('io'))
print(String0.find('in'))<jupyter_output>3
20
<jupyter_text>The index value returned is the index of the first element in the input data.<jupyter_code>print(String0[7])<jupyter_output>e
<jupyter_text>One can also input **find( )** function between which index values it has to search.<jupyter_code>print(String0.find('j',1))
print(String0.find('j',1,3))<jupyter_output>-1
-1
<jupyter_text>**capitalize( )** is used to capitalize the first element in the string.<jupyter_code>String3 = 'observe the first letter in this sentence.'
print(String3.capitalize())<jupyter_output>Observe the first letter in this sentence.
<jupyter_text>**center( )** is used to center align the string by specifying the field width.<jupyter_code>String0.center(70)<jupyter_output><empty_output><jupyter_text>One can also fill the left out spaces with any other character.<jupyter_code>String0.center(70,'-')<jupyter_output><empty_output><jupyter_text>**zfill( )** is used for zero padding by specifying the field width.<jupyter_code>String0.zfill(30)<jupyter_output><empty_output><jupyter_text>**expandtabs( )** allows you to change the spacing of the tab character. '\t' which is by default set to 8 spaces.<jupyter_code>s = 'h\te\tl\tl\to'
print(s)
print(s.expandtabs(1))
print(s.expandtabs())<jupyter_output>h e l l o
h e l l o
h e l l o
<jupyter_text>**index( )** works the same way as **find( )** function the only difference is find returns '-1' when the input element is not found in the string but **index( )** function throws a ValueError<jupyter_code>print(String0.index('Rio'))
print(String0.index('Janeiro',0))
print(String0.index('Janeiro',12,20))<jupyter_output>2
9
<jupyter_text>**endswith( )** function is used to check if the given string ends with the particular char which is given as input.<jupyter_code>print(String0.endswith('y'))<jupyter_output>False
<jupyter_text>The start and stop index values can also be specified.<jupyter_code>print(String0.endswith('o',0))
print(String0.endswith('R',0,3))<jupyter_output>True
True
<jupyter_text>**join( )** function is used add a char in between the elements of the input string.<jupyter_code>'a'.join('*_-')<jupyter_output><empty_output><jupyter_text>'*_-' is the input string and char 'a' is added in between each element**join( )** function can also be used to convert a list into a string.<jupyter_code>a = list(String0)
print(a)
b = ''.join(a)
print(b)<jupyter_output>['O', ' ', 'R', 'i', 'o', ' ', 'd', 'e', ' ', 'J', 'a', 'n', 'e', 'i', 'r', 'o', ' ', 'é', ' ', 'l', 'i', 'n', 'd', 'o']
O Rio de Janeiro é lindo
<jupyter_text>Before converting it into a string **join( )** function can be used to insert any char in between the list elements.<jupyter_code>c = '/'.join(a)[33:]
print(c)<jupyter_output>/é/ /l/i/n/d/o
<jupyter_text>**split( )** function is used to convert a string back to a list. Think of it as the opposite of the **join()** function.<jupyter_code>d = c.split('/')
print(d)<jupyter_output>['', 'é', ' ', 'l', 'i', 'n', 'd', 'o']
<jupyter_text>In **split( )** function one can also specify the number of times you want to split the string or the number of elements the new returned list should conatin. The number of elements is always one more than the specified number this is because it is split the number of times specified.<jupyter_code>e = c.split('/',3)
print(e)
print(len(e))<jupyter_output>['', 'é', ' ', 'l/i/n/d/o']
4
<jupyter_text>String Indexing and Slicing are similar to Lists which was explained in detail earlier.<jupyter_code>print(a[4])
print(a[4:])<jupyter_output><empty_output><jupyter_text>**lower( )** converts any capital letter to small letter.<jupyter_code>print(String0)
print(String0.lower())<jupyter_output>O Rio de Janeiro é lindo
o rio de janeiro é lindo
<jupyter_text>**upper( )** converts any small letter to capital letter.<jupyter_code>String0.upper()<jupyter_output><empty_output><jupyter_text>**replace( )** function replaces the element with another element.<jupyter_code>String0.replace('O Rio de Janeiro','São Paulo')<jupyter_output><empty_output><jupyter_text>**strip( )** function is used to delete elements from the right end and the left end which is not required.<jupyter_code>f = ' hello '<jupyter_output><empty_output><jupyter_text>If no char is specified then it will delete all the spaces that is present in the right and left hand side of the data.<jupyter_code>f.strip()<jupyter_output><empty_output><jupyter_text>**strip( )** function, when a char is specified then it deletes that char if it is present in the two ends of the specified string.<jupyter_code>f = ' ***----hello---******* '
f.strip('*')<jupyter_output><empty_output><jupyter_text>The asterisk had to be deleted but is not. This is because there is a space in both the right and left hand side. So in strip function. The characters need to be inputted in the specific order in which they are present.<jupyter_code>print(f.strip(' *'))
print(f.strip(' *-'))<jupyter_output>----hello---
hello
<jupyter_text>**lstrip( )** and **rstrip( )** function have the same functionality as strip function but the only difference is **lstrip( )** deletes only towards the left side and **rstrip( )** towards the right.<jupyter_code>print(f.lstrip(' *'))
print(f.rstrip(' *'))<jupyter_output>----hello---*******
***----hello---
<jupyter_start><jupyter_text>### MY470 Computer Programming
# Programming in Teams
### Week 5 Lab

## Classes<jupyter_code>class MyClass(object):
"""DocString to define class."""
# DocString, not comments. DocStrings are for users, and will travel with the class.
    # Information about the implementation (not abstraction) giving concrete meaning
# to the abstract methods.
# Special Method: Always FIRST
def __init__(self, vals):
"""Create a new instance from class."""
# "__" is two underscores.
# A constructor method is a special function that creates an instance of the class.
        # Any initialising you would like to do with your class object.
# Data attributes are defined here with the "self." prefix
self.vals = vals
def class_methods(self, arg):
"""DocString to describe method."""
# Some methods that do something. Get() and set() methods are common.
# Self is used to represent the instance of the class.
# When working out the structure we can originally just leave "pass".
pass
# Special Method: Always LAST
def __str__(self):
"""Return a string representation of object."""
# Define here how you want the Class Instance to be printed.
# Otherwise, you will get the location (where the Class Instance is stored)
        return str(sorted(self.vals))  # __str__ must return a string, not a list
<jupyter_output><empty_output><jupyter_text>## Iterables
When you create a list, you can read its items one by one. Reading its items one by one is called iteration.<jupyter_code>mylist = [x*x for x in range(3)]
for i in mylist:
print(i)<jupyter_output>0
1
4
<jupyter_text>`mylist` is an iterable. When you use a list comprehension, you create a list, and so an iterable. Everything you can use ```for... in...``` on is an iterable: lists, strings, collections, files...
These iterables are handy because you can read them as much as you wish. However, you store all the values in memory and this is not always what you want when you have a lot of values.
## Generators
[Source](https://stackoverflow.com/questions/231767/what-does-the-yield-keyword-do)
Generators are iterators, a kind of iterable **you can only iterate over once**. Generators do not store all the values in memory, they generate the values on the fly.
<jupyter_code>mygenerator = (x*x for x in range(3))
for i in mygenerator:
print(i)<jupyter_output>0
1
4
<jupyter_text>The syntax is the same except you used ```()``` instead of ```[]```. BUT, you cannot perform ```for i in mygenerator``` a second time since generators can only be used once: they calculate 0, then forget about it and calculate 1, and end calculating 4, one by one.<jupyter_code>for i in mygenerator:
print(i)<jupyter_output><empty_output><jupyter_text>## Yield
```yield``` is a keyword that is used like ```return```, except the function will return a generator.
<jupyter_code>def createGenerator():
mylist = range(3)
for i in mylist:
yield i*i
mygenerator = createGenerator() # create a generator
print(mygenerator) # mygenerator is an object!
for i in mygenerator:
print(i)<jupyter_output><generator object createGenerator at 0x7fd1e0a59ed0>
0
1
4
<jupyter_text>When you call the function, the code you have written in the function body does not run. The function only returns the generator object.
**Then, your code will continue from where it left off each time ```for``` uses the generator.**<jupyter_code>def my_gen():
n = 1
print('This is printed first')
# Generator function contains yield statements
yield n
n += 1
print('This is printed second')
yield n
n += 1
print('This is printed at last')
yield n
# It returns an object but does not start execution immediately.
a = my_gen()
# We can iterate through the items using next().
next(a)
# Once the function yields, the function is paused and the control is transferred to the caller.
# Local variables and their states are remembered between successive calls.
print(next(a))
print(next(a))
# Finally, when the function terminates, StopIteration is raised automatically on further calls.
# Generators can only be called once.
next(a)<jupyter_output><empty_output><jupyter_text>
So ...
- The first time ```for``` calls the generator object created from your function, it will run the code in your function from the beginning until it hits ```yield```, then it'll return the first value of the loop.
- Then, each subsequent call will run another iteration of the loop you have written in the function and return the next value. This will continue until the generator is considered empty, which happens when the function runs without hitting ```yield```.
- That can be because the loop has come to an end, or because you no longer satisfy an ```if / else``` statement.

## Suggested Workflow for Programming in Teams
1. Use `pass` to outline structure of program
2. Write class/method/function specifications
3. Split the work
4. Fill in the details
5. Review each other's code to identify and fix potential problems
6. Merge
7. Identify bugs and problems
8. Repeat 3-7 as necessary## GitHub as a Collaboration Tool
1. Clone repository locally
2. Make changes in cloned repository
3. Upload new file, **creating a new branch and starting a pull request**
## Create a New Branch When Uploading Changed File
## Open a Pull Request to Get Feedback (Automatically Directed There)
## You (and Later Your Partner) Can View Your Changes Highlighted
## Wait for Comments from Partner Before Merging
## Confirm Merge
## Open Issues to Discuss Problems and Ask Partner for Help
## Discuss
<jupyter_code># Exercise 1: Work with the person next to you to design
# classes to manage the products, customers, and purchase
# orders for an online book store such as amazon.com.
# Outline the data attributes and useful methods for
# each class. You can discuss and create the outline together.
# Exercise 2: Create a new repository in your account and upload
# the class hierarchy you created. Practice cloning locally,
# making changes, and uploading the changed file to your remote
# repository by creating a branch and opening a pull request.
# (Note that in order for others to be able to upload files
# to your repository, you need to grant them push access.)
# Exercise 3: Open a new issue in your partner's repository.
# (Note that you don't need push access to open issues.)
<jupyter_output><empty_output>
<jupyter_start><jupyter_text>
# Lab | Map, Reduce, Filter
## Introduction
In this lab, we will implement what we have learned about functional programming using the map, reduce, and filter functions. These functions allow us to pass an input and a transformation to a function and produce an output.
## Getting Started
Follow the instructions and add your code and explanations as necessary. By the end of this lab, you will have learned about mapping, reducing, and filtering as well as applying functions in Pandas.
## Resources
[The Official Python Documentation on Mapping, Reducing, and Filtering](https://docs.python.org/3/howto/functional.html#built-in-functions)
[The `apply` Function in Pandas](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html)
# Before you start:
- Comment as much as you can
- Happy learning!<jupyter_code># Import reduce from functools, numpy and pandas
import pandas as pd
import numpy as np
from functools import reduce<jupyter_output><empty_output><jupyter_text># Challenge 1 - Mapping
#### We will use the map function to clean up words in a book.
In the following cell, we will read a text file containing the book The Prophet by Khalil Gibran.<jupyter_code># Run this code:
location = '58585-0.txt'
with open(location, 'r', encoding="utf8") as f:
prophet = f.read().split(' ')<jupyter_output><empty_output><jupyter_text>#### Let's remove the first 568 words since they contain information about the book but are not part of the book itself.
Do this by removing from `prophet` elements 0 through 567 of the list (you can also do this by keeping elements 568 through the last element).<jupyter_code># your code here
to_remove = prophet[0:568]  # elements 0 through 567 inclusive (568 words in total)
for i in to_remove:
prophet.remove(i)<jupyter_output><empty_output><jupyter_text>If you look through the words, you will find that many words have a reference attached to them. For example, let's look at words 1 through 10.
Expected output:
````python
['PROPHET\n\n|Almustafa,',
'the{7}',
'chosen',
'and',
'the\nbeloved,',
'who',
'was',
'a',
'dawn',
'unto']
````<jupyter_code># your code here
prophet[1:10]<jupyter_output><empty_output><jupyter_text>#### The next step is to create a function that will remove references.
We will do this by splitting the string on the `{` character and keeping only the part before this character. Write your function below.<jupyter_code>def reference(x):
'''
Input: A string
Output: The string with references removed
Example:
Input: 'the{7}'
Output: 'the'
'''
# your code here
return x.split('{')[0]<jupyter_output><empty_output><jupyter_text>Now that we have our function, use the `map()` function to apply this function to our book, The Prophet. Return the resulting list to a new list called `prophet_reference`.<jupyter_code># your code here
prophet_reference = list(map(reference,prophet))<jupyter_output><empty_output><jupyter_text>Another thing you may have noticed is that some words contain a line break. Let's write a function to split those words. Our function will return the string split on the character `\n`. Write your function in the cell below.<jupyter_code>def line_break(x):
'''
Input: A string
Output: A list of strings split on the line break (\n) character
Example:
Input: 'the\nbeloved'
Output: ['the', 'beloved']
'''
# your code here
return x.split('\n')<jupyter_output><empty_output><jupyter_text>Apply the `line_break` function to the `prophet_reference` list. Name the new list `prophet_line`.<jupyter_code># your code here
prophet_line = list(map(line_break,prophet_reference))<jupyter_output><empty_output><jupyter_text>If you look at the elements of `prophet_line`, you will see that the function returned lists and not strings. Our list is now a list of lists. Flatten the list using list comprehension. Assign this new list to `prophet_flat`.<jupyter_code># your code here
prophet_flat = []
for i in prophet_line:
for aux in i:
prophet_flat.append(aux)<jupyter_output><empty_output><jupyter_text># Challenge 2 - Filtering
When printing out a few words from the book, we see that there are words that we may not want to keep if we choose to analyze the corpus of text. Below is a list of words that we would like to get rid of. Create a function that will return false if it contains a word from the list of words specified and true otherwise.<jupyter_code>def word_filter(x):
'''
Input: A string
Output: True if the word is not in the specified list
and False if the word is in the list.
Example:
word list = ['and', 'the']
Input: 'and'
Output: False
Input: 'John'
Output: True
'''
word_list = ['and', 'the', 'a', 'an']
# your code here
if x in word_list:
return False
else:
return True<jupyter_output><empty_output><jupyter_text>Use the `filter()` function to filter out the words speficied in the `word_filter()` function. Store the filtered list in the variable `prophet_filter`.<jupyter_code>prophet_filter = list(filter(word_filter,prophet_flat))<jupyter_output><empty_output><jupyter_text># Bonus Challenge
Rewrite the `word_filter` function above to not be case sensitive.<jupyter_code>def word_filter_case(x):
word_list = ['and', 'the', 'a', 'an']
# your code here
if x.lower() in word_list:
return False
else:
return True
prophet_filter = list(filter(word_filter_case,prophet_flat))<jupyter_output><empty_output><jupyter_text># Challenge 3 - Reducing
#### Now that we have significantly cleaned up our text corpus, let's use the `reduce()` function to put the words back together into one long string separated by spaces.
We will start by writing a function that takes two strings and concatenates them together with a space between the two strings.<jupyter_code>def concat_space(a, b):
'''
Input:Two strings
Output: A single string separated by a space
Example:
Input: 'John', 'Smith'
Output: 'John Smith'
'''
# your code here
return a + " " + b<jupyter_output><empty_output><jupyter_text>Use the function above to reduce the text corpus in the list `prophet_filter` into a single string. Assign this new string to the variable `prophet_string`.<jupyter_code># your code here
prophet_string = reduce(concat_space,prophet_filter)
prophet_string<jupyter_output><empty_output><jupyter_text># Challenge 4 - Applying Functions to DataFrames
#### Our next step is to use the apply function to a dataframe and transform all cells.
To do this, we will connect to Ironhack's database and retrieve the data from the *pollution* database. Select the *beijing_pollution* table and retrieve its data. The data is also available at https://archive.ics.uci.edu/ml/datasets/Beijing+PM2.5+Data#<jupyter_code># your code here
data = pd.read_csv('beijing_pollution.csv')<jupyter_output><empty_output><jupyter_text>Let's look at the data using the `head()` function.
Expected output:
>
>| | No | year | month | day | hour | pm2.5 | DEWP | TEMP | PRES | cbwd | Iws | Is | Ir |
|---:|-----:|-------:|--------:|------:|-------:|--------:|-------:|-------:|-------:|:-------|------:|-----:|-----:|
| 0 | 1 | 2010 | 1 | 1 | 0 | nan | -21 | -11 | 1021 | NW | 1.79 | 0 | 0 |
| 1 | 2 | 2010 | 1 | 1 | 1 | nan | -21 | -12 | 1020 | NW | 4.92 | 0 | 0 |
| 2 | 3 | 2010 | 1 | 1 | 2 | nan | -21 | -11 | 1019 | NW | 6.71 | 0 | 0 |
| 3 | 4 | 2010 | 1 | 1 | 3 | nan | -21 | -14 | 1019 | NW | 9.84 | 0 | 0 |
| 4 | 5 | 2010 | 1 | 1 | 4 | nan | -20 | -12 | 1018 | NW | 12.97 | 0 | 0 |<jupyter_code># your code here
data.head()<jupyter_output><empty_output><jupyter_text>The next step is to create a function that divides a cell by 24 to produce an hourly figure. Write the function below.<jupyter_code>def hourly(x):
'''
Input: A numerical value
Output: The value divided by 24
Example:
Input: 48
Output: 2.0
'''
# your code here
return x/24<jupyter_output><empty_output><jupyter_text>Apply this function to the columns `Iws`, `Is`, and `Ir`. Store this new dataframe in the variable `pm25_hourly`.<jupyter_code># your code here
pm25_hourly = data[['Iws','Is','Ir']].apply(hourly)<jupyter_output><empty_output><jupyter_text>#### Our last challenge will be to create an aggregate function and apply it to a select group of columns in our dataframe.
Write a function that returns the standard deviation of a column divided by the length of a column minus 1. Since we are using pandas, do not use the `len()` function. One alternative is to use `count()`. Also, use the numpy version of standard deviation.<jupyter_code>def sample_sd(x):
'''
Input: A Pandas series of values
Output: the standard deviation divided by the number of elements in the series
Example:
Input: pd.Series([1,2,3,4])
Output: 0.3726779962
'''
# your code here
return np.std(x)/(x.count()-1)
pm25_sample = data[['Iws']].apply(sample_sd)<jupyter_output><empty_output>
|
no_license
|
/14-Map_Reduce_Filter/main.ipynb
|
carolineferguson/data-labs
| 19 |
<jupyter_start><jupyter_text># 4. Neural Style Transfer on AKS
Now that the AKS cluster is up, we need to deploy our __flask app__ and __scoring app__ onto it.
To do so, we'll do the following:
1. Build our __flask app__ and __scoring app__ Docker images and push them to Dockerhub
2. Create a deployment manifest for each of these apps (these manifests will need the proper configuration for the pods to use blobfuse to access our blob storage container). We should end up creating: `flask_app_deployment.json` and `scoring_app_deployment.json`
3. Use `kubectl` to make these deployments to our AKS cluster
4. Expose the __flask app__ REST endpoint so that it can be accessed externally
### Kubernetes Deployment
In this notebook, we will deploy our __flask app__ and __scoring app__ on the kubernetes cluster. Since the __flask app__ does not require heavy computation, we will deploy it on one node and reserve the remaining nodes for the __scoring app__ as it will perform the parallel computation.---### Import packages and load .env<jupyter_code>from dotenv import set_key, get_key, find_dotenv, load_dotenv
from pathlib import Path
import subprocess
import json
import os
env_path = find_dotenv(raise_error_if_not_found=True)
load_dotenv(env_path)<jupyter_output><empty_output><jupyter_text>### Build Scoring App Docker Image<jupyter_code>%%writefile scoring_app/requirements.txt
azure==4.0.0
torch==0.4.1
torchvision==0.2.1
%%writefile scoring_app/Dockerfile
FROM nvidia/cuda:9.0-cudnn7-devel-ubuntu16.04
RUN echo "deb http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1604/x86_64 /" > /etc/apt/sources.list.d/nvidia-ml.list
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
ca-certificates \
cmake \
curl \
git \
nginx \
supervisor \
wget && \
rm -rf /var/lib/apt/lists/*
ENV PYTHON_VERSION=3.6
RUN curl -o ~/miniconda.sh -O https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh && \
chmod +x ~/miniconda.sh && \
~/miniconda.sh -b -p /opt/conda && \
rm ~/miniconda.sh && \
/opt/conda/bin/conda create -y --name py$PYTHON_VERSION python=$PYTHON_VERSION && \
/opt/conda/bin/conda clean -ya
ENV PATH /opt/conda/envs/py$PYTHON_VERSION/bin:$PATH
ENV LD_LIBRARY_PATH /opt/conda/envs/py$PYTHON_VERSION/lib:/usr/local/cuda/lib64/:$LD_LIBRARY_PATH
ENV PYTHONPATH /code/:$PYTHONPATH
RUN mkdir /app
WORKDIR /app
ADD process_images_from_queue.py /app
ADD style_transfer.py /app
ADD main.py /app
ADD util.py /app
ADD requirements.txt /app
ADD azure.py /app
RUN pip install --no-cache-dir -r requirements.txt
CMD ["python", "main.py"]
!sudo docker build -t {get_key(env_path, "SCORING_IMAGE")} scoring_app<jupyter_output>Sending build context to Docker daemon 35.84kB
Step 1/17 : FROM nvidia/cuda:9.0-cudnn7-devel-ubuntu16.04
---> 7e8410ba243b
Step 2/17 : RUN echo "deb http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1604/x86_64 /" > /etc/apt/sources.list.d/nvidia-ml.list
---> Using cache
---> 2aee8c10151f
Step 3/17 : RUN apt-get update && apt-get install -y --no-install-recommends build-essential ca-certificates cmake curl git nginx supervisor wget && rm -rf /var/lib/apt/lists/*
---> Using cache
---> c69b13e821b7
Step 4/17 : ENV PYTHON_VERSION=3.6
---> Using cache
---> 9c3490d15c2d
Step 5/17 : RUN curl -o ~/miniconda.sh -O https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh && chmod +x ~/miniconda.sh && ~/miniconda.sh -b -p /opt/conda && rm ~/miniconda.sh && /opt/conda/bin/conda create -y --name py$PYTHON_VERSION python=$PYTHON_VERSION && /opt/conda/bin/conda c[...]<jupyter_text>Tag and push docker image<jupyter_code>!sudo docker login --username pjh177787 --password 'Hans&951022'
repo = "{}/{}".format(get_key(env_path, "DOCKER_LOGIN"), get_key(env_path, "SCORING_IMAGE"))
!sudo docker tag {get_key(env_path, "SCORING_IMAGE")} {repo}
!sudo docker push {repo}<jupyter_output>The push refers to repository [docker.io/pjh177787/oxford_scoring_app]
34c46d98: Preparing
112a59d2: Preparing
b59fe06d: Preparing
93a511e9: Preparing
34dc443b: Preparing
711df71e: Preparing
0c6970e5: Preparing
e82df0c2: Preparing
89b11fc2: Preparing
b7e6614f: Preparing
c5838665: Preparing
7ea5d26b: Preparing
7acf624e: Preparing
a0fe4fdd: Preparing
1c46eb92: Preparing
6e800c43: Preparing
f22d44f3: Preparing
6f329a25: Preparing
7de5faec: Preparing
a27b0484: Layer already exists
latest: digest: sha256:1ee0eb0937c29ab45041fb6b6e3e6fba3f9441b1172410fa8d27c409455bfd8f size: 4507
<jupyter_text>### Build Flask App Docker ImageCreate our Dockerfile and save it to the directory, `flask_app/`.<jupyter_code>%%writefile flask_app/Dockerfile
FROM continuumio/miniconda3
RUN mkdir /app
WORKDIR /app
ADD add_images_to_queue.py /app
ADD preprocess.py /app
ADD postprocess.py /app
ADD util.py /app
ADD main.py /app
ADD azure.py /app
RUN conda install -c conda-forge -y ffmpeg
RUN pip install azure
RUN pip install flask
CMD ["python", "main.py"]<jupyter_output>Overwriting flask_app/Dockerfile
<jupyter_text>Build the Docker image<jupyter_code>!sudo docker build -t {get_key(env_path, "FLASK_IMAGE")} flask_app<jupyter_output>Sending build context to Docker daemon 24.06kB
Step 1/12 : FROM continuumio/miniconda3
---> 6b5cf97566c3
Step 2/12 : RUN mkdir /app
---> Using cache
---> 3e41fbdb3278
Step 3/12 : WORKDIR /app
---> Using cache
---> 032cc5cbe3da
Step 4/12 : ADD add_images_to_queue.py /app
---> Using cache
---> a7aad3e80c4e
Step 5/12 : ADD preprocess.py /app
---> Using cache
---> 88ec5fb7fcf2
Step 6/12 : ADD postprocess.py /app
---> Using cache
---> 796d0a2feade
Step 7/12 : ADD util.py /app
---> Using cache
---> 842a56dcdc61
Step 8/12 : ADD main.py /app
---> Using cache
---> c01786b74db0
Step 9/12 : RUN conda install -c conda-forge -y ffmpeg
---> Using cache
---> 44813df7c13c
Step 10/12 : RUN pip install azure
---> Using cache
---> 3f610cf3cffd
Step 11/12 : RUN pip install flask
---> Using cache
---> 1391f94ad253
Step 12/12 : CMD ["python", "main.py"]
---> Using cache
---> 8fd611bb40dc
Successfully built 8fd611bb40dc
Successfully tagged oxford_flask_app:latest
<jupyter_text>Tag and push.<jupyter_code>repo = "{}/{}".format(get_key(env_path, "DOCKER_LOGIN"), get_key(env_path, "FLASK_IMAGE"))
!sudo docker tag {get_key(env_path, "FLASK_IMAGE")} {repo}
!sudo docker push {repo}<jupyter_output>The push refers to repository [docker.io/pjh177787/oxford_flask_app]
46fb0425: Preparing
dbdc848b: Preparing
671269e9: Preparing
2767c2e6: Preparing
a5e2b056: Preparing
fe3428e9: Preparing
55a10f32: Preparing
55a10f32: Waiting
66549fba: Preparing
c65c8dc4: Preparing
2f5d7ee9: Preparing
2f5d7ee9: Waiting
1ff9ade6: Preparing
1ff9ade6: Layer already exists
latest: digest: sha256:665578e4e83eb869241d779113116ebadc84e0446c8b76bfaf9710db561d479a size: 3249
<jupyter_text>### Create our Flask App and Scoring App deployments on AKSWe need to deploy both our aci and aks docker images to the AKS cluster. Since we'll need to set up our gpu and drivers and blobfuse mount point for both deployments, we'll set these up first:<jupyter_code>volume_mounts = [
# {"name": "nvidia", "mountPath": "/usr/local/nvidia"},
# {"name": "blob", "mountPath": get_key(env_path, "MOUNT_DIR")},
]
resources = {
# "requests": {"alpha.kubernetes.io/nvidia-gpu": 1},
# "limits": {"alpha.kubernetes.io/nvidia-gpu": 1},
}
volumes = [
# {"name": "nvidia", "hostPath": {"path": "/usr/local/nvidia"}},
# {
# "name": "blob",
# "flexVolume": {
# "driver": "azure/blobfuse",
# "readOnly": False,
# "secretRef": {"name": "blobfusecreds"},
# "options": {
# "container": get_key(env_path, "STORAGE_CONTAINER_NAME"),
# "tmppath": "/tmp/blobfuse",
# "mountoptions": "--file-cache-timeout-in-seconds=120 --use-https=true",
# },
# },
# },
]
env = [
{
"name": "MOUNT_DIR",
"value": get_key(env_path, "MOUNT_DIR")
},
{
"name": "LB_LIBRARY_PATH",
"value": "$LD_LIBRARY_PATH:/usr/local/nvidia/lib64:/opt/conda/envs/py3.6/lib",
},
{
"name": "DP_DISABLE_HEALTHCHECKS",
"value": "xids"
},
{
"name": "STORAGE_MODEL_DIR",
"value": get_key(env_path, "STORAGE_MODEL_DIR")
},
{
"name": "SUBSCRIPTION_ID",
"value": get_key(env_path, "SUBSCRIPTION_ID")
},
{
"name": "RESOURCE_GROUP",
"value": get_key(env_path, "RESOURCE_GROUP")
},
{
"name": "REGION",
"value": get_key(env_path, "REGION")
},
{
"name": "SB_SHARED_ACCESS_KEY_NAME",
"value": get_key(env_path, "SB_SHARED_ACCESS_KEY_NAME")
},
{
"name": "SB_SHARED_ACCESS_KEY_VALUE",
"value": get_key(env_path, "SB_SHARED_ACCESS_KEY_VALUE")
},
{
"name": "SB_NAMESPACE",
"value": get_key(env_path, "SB_NAMESPACE")
},
{
"name": "SB_QUEUE",
"value": get_key(env_path, "SB_QUEUE")
},
]<jupyter_output><empty_output><jupyter_text>Define the aks deployment and save it to a `scoring_app_deployment.json` file using the variables set above.<jupyter_code>scoring_app_deployment_json = {
"apiVersion": "apps/v1beta1",
"kind": "Deployment",
"metadata": {
"name": "scoring-app",
"labels": {
"purpose": "dequeue_messages_and_apply_style_transfer"
}
},
"spec": {
"replicas": int(get_key(env_path, "NODE_COUNT")) - 1,
"template": {
"metadata": {
"labels": {
"app": "scoring-app"
}
},
"spec": {
"containers": [
{
"name": "scoring-app",
"image": "{}/{}:latest".format(get_key(env_path, "DOCKER_LOGIN"), get_key(env_path, "SCORING_IMAGE")),
"volumeMounts": volume_mounts,
"resources": resources,
"ports": [{
"containerPort": 433
}],
"env": env,
}
],
"volumes": volumes
},
},
},
}
with open("scoring_app_deployment.json", "w") as outfile:
json.dump(scoring_app_deployment_json, outfile, indent=4, sort_keys=True)
outfile.write('\n\n')<jupyter_output><empty_output><jupyter_text>Using the `scoring_app_deployment.json` we created, create our deployment on AKS. This can take a few minutes...<jupyter_code>!kubectl delete -f scoring_app_deployment.json
!kubectl create -f scoring_app_deployment.json<jupyter_output>deployment.apps/scoring-app created
<jupyter_text>Define the flask app deployment and save it to a `flask_app_deployment.json` file using the variables set above.<jupyter_code>flask_app_deployment_json = {
"apiVersion": "apps/v1beta1",
"kind": "Deployment",
"metadata": {
"name": "flask-app",
"labels": {
"purpose": "pre_and_post_processing_and_queue_images"
}
},
"spec": {
"replicas": 1,
"template": {
"metadata": {
"labels": {
"app": "flask-app"
}
},
"spec": {
"containers": [
{
"name": "flask-app",
"image": "{}/{}:latest".format(get_key(env_path, "DOCKER_LOGIN"), get_key(env_path, "FLASK_IMAGE")),
"volumeMounts": volume_mounts,
"resources": resources,
"ports": [{
"containerPort": 8080
}],
"env": env,
}
],
"volumes": volumes
},
},
},
}
with open("flask_app_deployment.json", "w") as outfile:
json.dump(flask_app_deployment_json, outfile, indent=4, sort_keys=True)
outfile.write('\n\n')<jupyter_output><empty_output><jupyter_text>Using the `flask_app_deployment.json` we created, create our flask app deployment on AKS. This can take a few minutes...<jupyter_code>!kubectl delete -f flask_app_deployment.json
!kubectl create -f flask_app_deployment.json<jupyter_output>deployment.apps/flask-app created
<jupyter_text>These deployments may take a few minutes. You can inspect the state of the pods by running the command: `kubectl get pods`. When the deployment is done, the results may look as follows:
```bash
NAME READY STATUS RESTARTS AGE
flask-app-6db66c97ff-x8rq4 1/1 Running 0 78s
scoring-app-846dd6bc79-5nm5b 1/1 Running 0 73s
scoring-app-846dd6bc79-6qc6k 1/1 Running 0 73s
scoring-app-846dd6bc79-8gtsv 1/1 Running 0 73s
scoring-app-846dd6bc79-hjsfc 1/1 Running 0 73s
```<jupyter_code>!kubectl exec scoring-app-7fc7d4cb9d-f7gct -- ls /opt/conda/envs/py3.6/lib/python3.6/site-packages/
!kubectl exec flask-app-66c4887ff8-wgztf -- ls /
!kubectl logs scoring-app-7fc7d4cb9d-f7gct
!kubectl describe pods<jupyter_output>Name: flask-app-66c4887ff8-wgztf
Namespace: default
Priority: 0
Node: aks-nodepool1-80042525-2/10.240.0.6
Start Time: Mon, 15 Jul 2019 09:10:55 +0000
Labels: app=flask-app
pod-template-hash=66c4887ff8
Annotations: <none>
Status: Running
IP: 10.244.4.8
Controlled By: ReplicaSet/flask-app-66c4887ff8
Containers:
flask-app:
Container ID: docker://e4d37554f27a8d3647f2a0cccff0225b2c0a5d9745961e5c9ec69e5815717b0a
Image: pjh177787/oxford_flask_app:latest
Image ID: docker-pullable://pjh177787/oxford_flask_app@sha256:665578e4e83eb869241d779113116ebadc84e0446c8b76bfaf9710db561d479a
Port: 8080/TCP
Host Port: 0/TCP
State: Running
Started: Mon, 15 Jul 2019 09:11:01 +0000
Ready: True
Restart Count: 0
Environment:
MOUNT_DIR: /data
LB_LIBRARY_PATH: $LD_LIBRARY_PATH:/usr/local/n[...]<jupyter_text>Expose the flask-app in the kubernetes cluster. This will open a public endpoint.<jupyter_code>!kubectl expose deployment flask-app --type="LoadBalancer"<jupyter_output>service/flask-app exposed
<jupyter_text>Run `!watch kubectl get services` and wait until the external ip goes from pending to being realized. It can take some time.
NOTE: If the following command is run without the external ip being realized, an error will be thrown. <jupyter_code>external_ip = !kubectl get services -o=jsonpath={.items[*].status.loadBalancer.ingress[0].ip}
external_ip = external_ip[0]<jupyter_output><empty_output><jupyter_text>Since we'll use the `external_ip` later on, save it to the dot-env file.<jupyter_code>set_key(env_path, "AKS_EXTERNAL_IP", external_ip)<jupyter_output><empty_output><jupyter_text>### Test that the deployment works end-to-endSet the name of the new test video.<jupyter_code>new_video_name = "aks_test_orangutan.mp4"<jupyter_output><empty_output><jupyter_text>Make a copy of the old `orangutan.mp4` video, named with the `new_video_name` set above. <jupyter_code>!cp data/orangutan.mp4 data/{new_video_name}<jupyter_output><empty_output><jupyter_text>Use `curl` to hit the endpoint of the kubernetes cluster we just deployed.<jupyter_code>!curl {external_ip}":8080/process?video_name="{new_video_name}<jupyter_output><empty_output><jupyter_text>Inspect your kubernetes cluster to see that the process is running. You can use the commands below to do so. Alternatively, you can also inspect the blob storage container to see that the images are being created.
When the video completes, you can play the video file directly from your mounted blob container:<jupyter_code>%%HTML
<video width="320" height="240" controls>
<source src="data/aks_test_orangutan/aks_test_orangutan_processed.mp4" type="video/mp4">
</video><jupyter_output><empty_output>
|
non_permissive
|
/.ipynb_checkpoints/04_style_transfer_on_aks-checkpoint.ipynb
|
pjh177787/batch-scoring-deep-learning-models-with-aks-cn
| 19 |
<jupyter_start><jupyter_text>Observation:
* From the heat map we can see that the radius, perimeter and area (_mean, _se and _worst) are highly correlated, so one or more of these parameters can be used as features for our prediction.
* Furthermore, we can see a strong correlation between the concave points, concavity and compactness (_mean, _se and _worst), and these can also be added as features for the prediction model.
I will continue by plotting another heatmap in search of features that are not directly linked to each other (the way radius, perimeter and area are) but are still strongly correlated with each other<jupyter_code>features = ['radius_mean','perimeter_mean','area_mean', 'compactness_mean', 'concavity_mean', 'concave points_mean','radius_se','perimeter_se',
'area_se','compactness_se', 'concavity_se', 'concave points_se',
'radius_worst','perimeter_worst','area_worst','compactness_worst', 'concavity_worst', 'concave points_worst']
corr_ = train_data[features].corr()
plt.figure(figsize=(14,8))
sns.heatmap(corr_, annot=True)<jupyter_output><empty_output><jupyter_text>NB: looking at the heatmap, we can see that the features not directly related to each other with the strongest correlations are the perimeter and the concave points. I will continue by investigating the data further with different kinds of plots.<jupyter_code>features_1 = ['perimeter_mean', 'concave points_mean', 'perimeter_se',
'concave points_se','perimeter_worst','concave points_worst']
# plt.figure(figsize=(50,50))
fig,axis = plt.subplots(nrows=2,ncols=3,figsize=(20,8))
sns.scatterplot("perimeter_mean","concave points_mean", hue='diagnosis', data=train_data, ax=axis[0,0])
sns.swarmplot("perimeter_se","concave points_mean",hue='diagnosis',data=train_data, ax=axis[0,1])
sns.scatterplot("perimeter_worst","concave points_mean",hue='diagnosis',data=train_data, ax=axis[0,2])
sns.swarmplot("diagnosis","perimeter_mean", data=train_data, ax=axis[1,0])
sns.swarmplot("diagnosis","perimeter_se",data=train_data, ax=axis[1,1])
sns.swarmplot("diagnosis","perimeter_worst",data=train_data, ax=axis[1,2])
<jupyter_output><empty_output>
|
no_license
|
/Breast_Cancer/Untitled.ipynb
|
Anosike-CK/CancerType_Prediction
| 2 |
<jupyter_start><jupyter_text># Mixed Linear Model <jupyter_code>import warnings
warnings.filterwarnings('ignore')<jupyter_output><empty_output><jupyter_text>## Import libraries <jupyter_code>import os
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
from statsmodels.regression.mixed_linear_model import MixedLM
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score
from sklearn.model_selection import cross_validate
from sklearn.model_selection import RepeatedKFold
import statsmodels.api as sm
import scipy.stats as stats
dta = pd.read_csv('https://raw.githubusercontent.com/CharlotteJames/ed-forecast/main/data/master_scaled_new.csv',
index_col=0)
dta.columns = ['_'.join([c.split('/')[0],c.split('/')[-1]]) if '/' in c else c for c in dta.columns]
dta.ccg.unique().shape<jupyter_output><empty_output><jupyter_text>## Add random feature<jupyter_code># Adding random features
rng = np.random.RandomState(0)
rand_var = rng.rand(dta.shape[0])
dta['rand1'] = rand_var
dta.shape<jupyter_output><empty_output><jupyter_text>## Train test split <jupyter_code>train, test = train_test_split(dta,random_state=29)
y_train = train['ae_attendances_attendances'].values
X_train = train.drop(['ae_attendances_attendances','ccg','month'], axis=1)
y_test = test['ae_attendances_attendances'].values
X_test = test.drop(['ae_attendances_attendances','ccg','month'], axis=1)<jupyter_output><empty_output><jupyter_text>## Model ### Cross-validate <jupyter_code>cv = RepeatedKFold(n_splits=5, n_repeats=5, random_state=1)
scores_train, scores_test = [],[]
y = dta['ae_attendances_attendances']
X = dta.drop(['ae_attendances_attendances','month', 'year'], axis=1)
for train_index, test_index in cv.split(X, y):
model = MixedLM(endog=y.iloc[train_index].values,
exog = X.iloc[train_index].drop(['ccg'],axis=1).values,
groups=X.iloc[train_index].ccg.values)
ml_fit = model.fit()
y_pred_test = model.predict(ml_fit.fe_params,
exog=X.iloc[test_index].drop(['ccg'],axis=1).values)
y_pred_train = model.predict(ml_fit.fe_params,
exog=X.iloc[train_index].drop(['ccg'],axis=1).values)
scores_test.append(r2_score(y.iloc[test_index],y_pred_test))
scores_train.append(r2_score(y.iloc[train_index],y_pred_train))
res=pd.DataFrame()
res['test_score'] = scores_test
res['train_score'] = scores_train
res.describe()<jupyter_output><empty_output><jupyter_text>### Coefficients <jupyter_code>model = MixedLM(endog=y, exog = X.drop(['ccg'],axis=1).values,
groups=X.ccg.values)
ml_fit = model.fit()
ml_fit.summary()<jupyter_output><empty_output><jupyter_text>### Residuals <jupyter_code>fig = plt.figure(figsize = (8, 5))
ax = sns.distplot(ml_fit.resid, hist = False, kde_kws = {"shade" : True, "lw": 1}, fit = stats.norm)
ax.set_title("KDE Plot of Model Residuals (Blue) and Normal Distribution (Black)")
ax.set_xlabel("Residuals")
plt.show()<jupyter_output><empty_output><jupyter_text>### Q-Q Plot <jupyter_code>fig = plt.figure(figsize = (8, 5))
ax = fig.add_subplot(111)
sm.qqplot(ml_fit.resid, dist = stats.norm, line = 's', ax = ax)
ax.set_title("Q-Q Plot")
plt.show()<jupyter_output><empty_output><jupyter_text>### Residuals by CCG <jupyter_code>fig = plt.figure(figsize = (15, 5))
ax = sns.boxplot(x = ml_fit.model.groups, y = ml_fit.resid)
ax.set_title("Distribution of Residuals for ED attendances by CCG ")
ax.set_ylabel("Residuals", fontsize=12)
ax.set_xlabel("CCG", fontsize=12)
plt.xticks(rotation=70)
plt.show()<jupyter_output><empty_output>
|
non_permissive
|
/pages/_build/jupyter_execute/multi_level_model.ipynb
|
CharlotteJames/ed-forecast
| 9 |
<jupyter_start><jupyter_text>## Understanding Greedy Algorithms### 1. What is a greedy algorithm?
- Called a greedy algorithm (탐욕 알고리즘)
- Used to obtain a value close to the optimal solution
- Whenever one of several options must be chosen, it proceeds by **picking the option that looks best at that moment**, and builds the final answer from those choices### 2. Greedy algorithm examples
### Problem 1: Coin change
- When the amount to pay is 4,720 won, pay it with 1, 50, 100, and 500 won coins using the fewest coins possible.
- Can be implemented by covering as much of the remaining amount as possible with the largest coin first
- With a greedy algorithm, simply pick the option that looks best at each step<jupyter_code>coin_list = [1, 100, 50, 500]
print (coin_list)
coin_list.sort(reverse=True)
print (coin_list)
coin_list = [500, 100, 50, 1]
def min_coin_count(value, coin_list):
total_coin_count = 0
details = list()
coin_list.sort(reverse=True)
for coin in coin_list:
coin_num = value // coin
total_coin_count += coin_num
value -= coin_num * coin
details.append([coin, coin_num])
return total_coin_count, details
min_coin_count(4720, coin_list)<jupyter_output><empty_output><jupyter_text>### Problem 2: Fractional Knapsack Problem
- The problem of packing items into a knapsack with weight limit k so that the total value is maximized
- Each item can be described by a weight (w) and a value (v)
- Items can be split, so a fraction of an item may be placed in the knapsack, which is why it is called the Fractional Knapsack Problem
- In contrast, there is also a knapsack problem in which items cannot be split (called the 0/1 Knapsack Problem)
<jupyter_code>data_list = [(10, 10), (15, 12), (20, 10), (25, 8), (30, 5)]
def get_max_value(data_list, capacity):
data_list = sorted(data_list, key=lambda x: x[1] / x[0], reverse=True)
total_value = 0
details = list()
for data in data_list:
if capacity - data[0] >= 0:
capacity -= data[0]
total_value += data[1]
details.append([data[0], data[1], 1])
else:
fraction = capacity / data[0]
total_value += data[1] * fraction
details.append([data[0], data[1], fraction])
break
return total_value, details
get_max_value(data_list, 30)<jupyter_output><empty_output>
|
no_license
|
/.ipynb_checkpoints/Chapter19-탐욕 알고리즘-checkpoint.ipynb
|
psh5487/DS-Algorithm
| 2 |
<jupyter_start><jupyter_text>## Emission data exploring and cleaning
This file contains the major exploration and cleaning that has been done with the emission data.<jupyter_code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt<jupyter_output><empty_output><jupyter_text>## Explanation of agriculture emissions datafiles#### Emission data number 1
The first emissions data in our dataset covers all emissions from agriculture. That is, both the emissions from crops and livestock production. <jupyter_code>emission_data = pd.read_csv('raw_data/emission_data_continent.csv', sep = ',', encoding = 'latin-1')
emission_data.head(3)
def explain_df(df):
print('The data contain(s) the following: ')
print(f' area(s) : {(df.Area.unique().tolist())}')
print(f' years : {(df.Year.min())} - {df.Year.max()}')
print(f' item(s) : {(df.Item.unique().tolist())}')
print(f' elements(s): {(df.Element.unique().tolist())}')
print(f' unit(s) : {(df.Unit.unique().tolist())}')
explain_df(emission_data)
<jupyter_output>The data contain(s) the following:
area(s) : ['Africa', 'Northern America', 'South America', 'Asia', 'Europe', 'Oceania']
years : 1961 - 2017
item(s) : ['Agriculture total']
elements(s): ['Emissions (CO2eq)']
unit(s) : ['gigagrams']
<jupyter_text>As you can see from the printout above, we have data for all the continents between 1961 and 2017, the only item is total agriculture emissions, and all values are in gigagrams of CO2 equivalent.
Let's examine whether any data is missing.<jupyter_code># Is any information missing?
print("Missing information in categorized dataset:\n", emission_data.isna().sum())<jupyter_output>Missing information in categorized dataset:
Domain Code 0
Domain 0
Area Code 0
Area 0
Element Code 0
Element 0
Item Code 0
Item 0
Year Code 0
Year 0
Unit 0
Value 0
Flag 0
Flag Description 0
Note 342
dtype: int64
<jupyter_text>The data seems good with no missing values, so let's plot it to get a sense of how it looks.<jupyter_code>fig = plt.figure(figsize = (15,8))
for area in emission_data.Area.unique():
plt.plot(emission_data[emission_data['Area'] == area].Year.values,
emission_data[emission_data['Area'] == area].Value.values)
plt.legend(emission_data.Area.unique())
plt.title('Agriculture emission per continent')
plt.xlabel('Year')
plt.ylabel('Amount (gigagrams)')
plt.show()<jupyter_output><empty_output><jupyter_text>#### Plot
The data does not show any extreme outliers, and even without particular domain knowledge of agriculture emissions it seems to make sense.<jupyter_code># Save to pickle
emission_data.to_pickle('./data/pickles/agriculture_emissions_continents.pkl')<jupyter_output><empty_output><jupyter_text>#### Emission data number 2
We also have another emissions file with categories, which makes it easier to determine the differences in emissions between crop and livestock production. This data covers the world and each continent, broken down by item category.<jupyter_code>categorized_emission_data = pd.read_csv('raw_data/Emissions_cat.csv', sep = ',', encoding = 'latin-1')
explain_df(categorized_emission_data)<jupyter_output>The data contain(s) the following:
area(s) : ['World', 'Africa', 'Northern America', 'South America', 'Asia', 'Europe', 'Oceania']
years : 1961 - 2017
item(s) : ['Cereals excluding rice', 'Rice, paddy', 'Meat, cattle', 'Milk, whole fresh cow', 'Meat, goat', 'Milk, whole fresh goat', 'Meat, buffalo', 'Milk, whole fresh buffalo', 'Meat, sheep', 'Milk, whole fresh sheep', 'Milk, whole fresh camel', 'Meat, chicken', 'Eggs, hen, in shell', 'Meat, pig']
elements(s): ['Emissions (CO2eq)', 'Production']
unit(s) : ['gigagrams', 'tonnes']
<jupyter_text>This dataset contains data about the CO2 emissions coming from various areas (_items_) such as _Rice_ and _Meat, sheep_ etc. Some of the data holds information about the production of each item, which is not of interest since we are interested in emissions. We will therefore remove rows with these values.<jupyter_code># Keep only emission data
categorized_emission_data = categorized_emission_data[categorized_emission_data.Element == 'Emissions (CO2eq)']
explain_df(categorized_emission_data)
# Is any information missing?
print("Missing information in categorized dataset:\n", categorized_emission_data.isna().sum())
categorized_emission_data.head(3)<jupyter_output>The data contain(s) the following:
area(s) : ['World', 'Africa', 'Northern America', 'South America', 'Asia', 'Europe', 'Oceania']
years : 1961 - 2017
item(s) : ['Cereals excluding rice', 'Rice, paddy', 'Meat, cattle', 'Milk, whole fresh cow', 'Meat, goat', 'Milk, whole fresh goat', 'Meat, buffalo', 'Milk, whole fresh buffalo', 'Meat, sheep', 'Milk, whole fresh sheep', 'Milk, whole fresh camel', 'Meat, chicken', 'Eggs, hen, in shell', 'Meat, pig']
elements(s): ['Emissions (CO2eq)']
unit(s) : ['gigagrams']
Missing information in categorized dataset:
Domain Code 0
Domain 0
Area Code 0
Area 0
Element Code 0
Element 0
Item Code 0
Item 0
Year Code 0
Year 0
Unit 0
Value 0
Flag 0
Flag Description 0
dtype: int64
<jupyter_text>We can see that this dataset is similar to the previous one apart from the _item_ column, which contains multiple values; this is what we wanted. Furthermore, we have no missing values. Let's save this file.<jupyter_code>categorized_emission_data.to_pickle('./data/pickles/agriculture_emissions_continents_categorized.pkl')<jupyter_output><empty_output><jupyter_text>**For now we will stick to the first dataset discussed in this notebook since we are more interested in the total agriculture emissions.**
Let's plot the _Agriculture, total_ emissions from dataset 1. Let's also make use of the population data so we can normalize the emissions by the population of a given continent or the world.<jupyter_code># Get population data
population = pd.read_csv('./data/csv/pop_continents.csv')
area_year_population = population.groupby(['Area', 'Year']).agg({'Value':'sum','Unit':'first'}).reset_index()
world_year_population = area_year_population.groupby('Year').agg({'Value':'sum'}).reset_index()
import warnings
import seaborn as sns; sns.set()
warnings.filterwarnings('ignore')
area_year_emission = emission_data.groupby(['Area', 'Year']).agg({'Value':'sum','Unit':'first'}).reset_index()
area_year_emission = area_year_emission[(area_year_emission['Year'] <= 2013)]
x = area_year_emission.Year.unique()
plot_y_emi = []
mycolors = ['tab:red', 'tab:blue', 'tab:green', 'tab:orange', 'tab:pink']
fig = plt.figure(figsize = (15,12))
areas = area_year_emission.Area.unique()
for area in areas:
plt.subplot(2,2,3)
plt.plot(area_year_emission[area_year_emission['Area'] == area].Year.values,
area_year_emission[area_year_emission['Area'] == area].Value.values/
area_year_population[area_year_population['Area'] == area].Value.values)
plt.subplot(2,2,1)
plt.plot(area_year_emission[area_year_emission['Area'] == area].Year.values,
area_year_emission[area_year_emission['Area'] == area].Value.values)
plot_y_emi.append(area_year_emission[area_year_emission['Area'] == area].Value.values.tolist())
plt.subplot(2,2,1)
plt.legend(areas)
plt.title('(1) Agriculture emission per continent')
plt.xlabel('Year')
plt.ylabel('Amount (gigagrams)')
plt.subplot(2,2,3)
plt.legend(areas)
plt.title('(3) Agriculture emission per continent, normalized on continent population')
plt.xlabel('Year')
plt.ylabel('Amount (gigagrams)')
# Global yearly emission
total_year_emission = area_year_emission.groupby('Year').agg({'Value':'sum'}).reset_index()
total_year_emission = total_year_emission[total_year_emission['Year'] <= 2013]
plt.subplot(2,2,2)
plt.stackplot(x, np.vstack(plot_y_emi), labels=area_year_emission.Area.unique(), alpha=0.8)
plt.legend(loc='upper left')
plt.title('(2) Total world agriculture emission')
plt.xlabel('Year')
plt.ylabel('Amount (gigagrams)')
plt.subplot(2,2,4)
plt.plot(total_year_emission['Year'].values,
total_year_emission['Value'].values/world_year_population['Value'].values)
plt.legend(['Total all continents'])
plt.title('(4) Total, normalized on population, world agriculture emission')
plt.xlabel('Year')
plt.ylabel('Amount (gigagrams)')
plt.show()<jupyter_output><empty_output><jupyter_text>##### Emission area plot
There are a few questions that arise when observing the plots above:
- We can observe a big increase in emissions around the year 1990, which seems slightly odd.
- In the normalized continent emissions, _Oceania_ has a much greater and more volatile normalized emission compared to the other continents. The relatively higher normalized emission by _Oceania_ could be reasonable given that they have a lot of livestock production; however, the sudden jumps in value from year to year seem off.
Let's try to find an answer to these observations.
The reason that the emissions jump in 1990 is most likely _New estimates of CO2 forest emissions and removals: 1990–2015_, which is stated [at faostat](http://www.fao.org/faostat/en/#data/GT/metadata), where the data comes from, in point number 10.
Now, when it comes to the odd behaviour in _Oceania_'s emissions, we should look at the data we have acquired regarding crop and livestock production as well as _Oceania_'s population in order to explain it.
<jupyter_code>fig = plt.figure(figsize=(20,5))
# Oceania's population
plt.subplot(1,3,1)
plt.plot(area_year_population.Year.unique(), area_year_population[area_year_population.Area == 'Oceania'].Value.values)
plt.title('Oceania\'s population', fontsize=15)
plt.xlabel('Year')
plt.ylabel('Population (1000 person)')
# Oceania's livestock production
plt.subplot(1,3,2)
meat_production_continents = pd.read_csv('data/csv/meat_continents.csv')
meat_production_oceania = meat_production_continents[(meat_production_continents.Area == 'Oceania') \
& (meat_production_continents.Item == 'Meat, Total')]
plt.plot(meat_production_oceania.Year.unique(), meat_production_oceania.Value.values)
plt.title('Oceania\'s livestock production', fontsize=15)
plt.xlabel('Year')
plt.ylabel('Amount (tonnes)')
# Oceania's crops production
crop_production_continents = pd.read_pickle('data/pickles/crops_continents.pkl')
crop_production_oceania = crop_production_continents[(crop_production_continents.Area == 'Oceania')]\
.groupby('Year').agg({'Value':'sum'}).reset_index()
plt.subplot(1,3,3)
plt.plot(crop_production_oceania.Year.unique(), crop_production_oceania.Value.values)
plt.title('Oceania\'s crop production', fontsize=15)
plt.xlabel('Year')
plt.ylabel('Amount (tonnes)')
plt.show()<jupyter_output><empty_output><jupyter_text>We can see that there is nothing weird about _Oceania_'s population data; however, the livestock and especially the crop production are volatile. This corresponds with the differences that we can observe in the agriculture emissions caused by _Oceania_. Since _Oceania_ is the continent which produces the least amount of CO2 from agriculture compared to all other continents, it is more likely to be volatile. For example, if the weather is bad for agriculture, it is more likely that the total amount will be affected compared to a bigger continent that is not as concentrated around a specific area.
#### Summarize
The data looks good and we are ready to move further with our analysis and use this data to answer the questions.<jupyter_code># Correlations
corr = np.corrcoef(x = total_year_emission.Value.values, y = world_year_population.Value.values)[0,1]
print(f'Correlation between agriculture emission and population is {round(corr,2)}')<jupyter_output>Correlation between agriculture emission and population is 0.99
<jupyter_text>#### Example use of second dataset<jupyter_code>sheep_prod = categorized_emission_data[categorized_emission_data.Item.str.contains('sheep')]
sheep_data = sheep_prod.groupby(['Area','Element','Year','Unit']).agg({'Value':'sum'})
sheep_data['Item'] = 'Sheep'
# Data for each countries sheep emissions per year
sheep_data.reset_index()<jupyter_output><empty_output>
|
no_license
|
/data_cleaning/emissions_cleaning.ipynb
|
verafristedt/ADA-2019-project1-sovellettu-tiedon-analysoint
| 12 |
<jupyter_start><jupyter_text># OMS file parser
1. <jupyter_code>import pandas as pd
df = pd.read_excel('2017年12月新能源利用小时.xlsx', header=1)
df
df#.drop_duplicates(['风电场名称'], keep=False).dropna()
df_tmp[df_tmp['风电场名称'].str.contains(r'平均')]<jupyter_output><empty_output>
|
no_license
|
/Untitled1.ipynb
|
anonymous-void/XinNengYuanReport
| 1 |
<jupyter_start><jupyter_text># loading the dataset<jupyter_code>import pandas as pd
def remove_airline_nameTag_and_links(tweets):
    cleaned_tweets = []
    for tweet in tweets:
        # build a new list instead of calling remove() while iterating,
        # which would skip words that follow a removed word
        tweet_words = [word for word in tweet.split()
                       if not (word.startswith('@') or word.startswith('http'))]
        tweet_sent = ' '.join(tweet_words)
        cleaned_tweets.append(tweet_sent)
    return cleaned_tweets
import emoji
def remove_emoji(tweets):
cleaned_tweet = []
for tweet in tweets:
tweet_sent = "".join(char for char in tweet if((char not in emoji.UNICODE_EMOJI) and (char < '0' or char > '9')))
cleaned_tweet.append(tweet_sent)
return cleaned_tweet
import numpy as np
df1 = pd.read_excel('emoji_sentiment_data.xlsx')
emoji_sentiment = {}
index = 0
for unicode in df1['Unicode codepoint']:
sentiment = np.array([df1['Negative'].iloc[index],
df1['Neutral'].iloc[index],
df1['Positive'].iloc[index]])
max_senti_pos = sentiment.argmax()
if(max_senti_pos == 0):
emoji_sentiment[unicode] = 'sad'
elif(max_senti_pos == 1):
emoji_sentiment[unicode] = 'neutral'
elif(max_senti_pos == 2):
emoji_sentiment[unicode] = 'happy'
index+=1
import numpy as np
def replace_emoji(tweets):
cleaned_tweet = []
replaced_sentiment = []
for tweet in tweets:
tweet_sent = " ".join(emoji_sentiment[hex(ord(char))] for char in tweet if(char in emoji.UNICODE_EMOJI and (hex(ord(char)) in emoji_sentiment)))
replaced_sentiment.append(tweet_sent)
remove_emoji_tweets = remove_emoji(tweets)
total_tweets = len(remove_emoji_tweets)
for tweet_no in range(total_tweets):
cleaned_tweet.append(remove_emoji_tweets[tweet_no] + replaced_sentiment[tweet_no])
return cleaned_tweet
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
import string
from nltk import pos_tag
from nltk.stem import WordNetLemmatizer
from nltk.corpus import wordnet
def simple_posTag(tag):
if(tag.startswith('J')):
return wordnet.ADJ
elif(tag.startswith('V')):
return wordnet.VERB
elif(tag.startswith('N')):
return wordnet.NOUN
elif(tag.startswith('R')):
return wordnet.ADV
else:
return wordnet.NOUN
def remove_stopwords_and_lemmatization_with_posTag(tweets):
stopWords = stopwords.words('english') + list(string.punctuation)
tweets_lemmatized = []
for tweet in tweets:
tweet_words = word_tokenize(tweet.lower())
lemmatizer = WordNetLemmatizer()
        # pos_tag expects a list of tokens, so pass [word] to tag the whole word
        lemmatize_tweet = [lemmatizer.lemmatize(word, pos=simple_posTag(pos_tag([word])[0][1])) for word in tweet_words if(word not in stopWords)]
tweets_lemmatized.append(" ".join(lemmatize_tweet))
return tweets_lemmatized
def building_dataset_with_emogi(tweets, count_vec):
tweets_without_nameTag_and_links = remove_airline_nameTag_and_links(tweets)
tweets_emoji_replace = replace_emoji(tweets_without_nameTag_and_links)
lemmatized_tweets_with_emoji = remove_stopwords_and_lemmatization_with_posTag(tweets_emoji_replace)
dataset = count_vec.fit_transform(lemmatized_tweets_with_emoji).todense()
features = count_vec.get_feature_names()
return features, dataset
def building_dataset_without_emogi(tweets, count_vec):
tweets_without_nameTag_and_links = remove_airline_nameTag_and_links(tweets)
tweets_without_emoji = remove_emoji(tweets_without_nameTag_and_links)
lemmatized_tweets_without_emoji = remove_stopwords_and_lemmatization_with_posTag(tweets_without_emoji)
dataset = count_vec.fit_transform(lemmatized_tweets_without_emoji).todense()
features = count_vec.get_feature_names()
return features, dataset
import pandas as pd
df = pd.read_csv('training_twitter_x_y_train.csv')
tweets = df['text']
sentiments = df['airline_sentiment']
from sklearn.feature_extraction.text import CountVectorizer
count_vec1 = CountVectorizer(max_features = 3000, ngram_range = (1,3))
features_emoji, train_dataset_emoji = building_dataset_with_emogi(tweets, count_vec1)
features_without_emoji, train_dataset_without_emoji = building_dataset_without_emogi(tweets, count_vec1)
train_dataset_emoji.shape, train_dataset_without_emoji.shape
from sklearn.model_selection import train_test_split
x_train_emoji, x_test_emoji, y_train_emoji, y_test_emoji = train_test_split(train_dataset_emoji, sentiments, random_state = 1234)
x_train_without_emoji, x_test_without_emoji, y_train_without_emoji, y_test_without_emoji = train_test_split(train_dataset_without_emoji, sentiments, random_state = 1234)
def get_n_components(data):
pca = PCA()
data_transform = pca.fit_transform(data)
n_components = 0
total_variance = sum(pca.explained_variance_)
current_variance = 0
while(current_variance/total_variance <= 0.99):
current_variance += pca.explained_variance_[n_components]
n_components += 1
return n_components
from sklearn.decomposition import PCA
pca_emoji = PCA(n_components = get_n_components(x_train_emoji), whiten = True)
x_train_emoji_pca = pca_emoji.fit_transform(x_train_emoji)
x_test_emoji_pca = pca_emoji.transform(x_test_emoji)
from sklearn.svm import SVC
#from sklearn.model_selection import GridSearchCV
svm_clf = SVC()
#grid = {"C" : [10**i for i in range(4)], "gamma" : [10**i for i in range(-2,3)]}
#best_svc = GridSearchCV(svm_clf, grid)
#best_svc.fit(x_train_emoji,sentiments)
#best_svc.best_estimator_
svm_clf.fit(x_train_emoji_pca, y_train_emoji)
svm_clf.score(x_train_emoji_pca, y_train_emoji)
svm_clf.score(x_test_emoji_pca, y_test_emoji)
from sklearn.decomposition import PCA
pca_without_emoji = PCA(n_components = get_n_components(x_train_without_emoji), whiten = True)
x_train_without_emoji_pca = pca_without_emoji.fit_transform(x_train_without_emoji)
x_test_without_emoji_pca = pca_without_emoji.transform(x_test_without_emoji)
svm_clf1 = SVC()
#grid = {"C" : [10**i for i in range(4)], "gamma" : [10**i for i in range(-2,3)]}
#best_svc = GridSearchCV(svm_clf, grid)
#best_svc.fit(x_train_emoji,sentiments)
#best_svc.best_estimator_
svm_clf1.fit(x_train_without_emoji_pca, y_train_without_emoji)
svm_clf1.score(x_train_without_emoji_pca, y_train_without_emoji)
svm_clf1.score(x_test_without_emoji_pca, y_test_without_emoji)<jupyter_output><empty_output>
|
no_license
|
/twitter sentiments.ipynb
|
divyansh220199/twitter-sentiments
| 1 |
<jupyter_start><jupyter_text># Exercise 04
Estimate a regression using the Capital Bikeshare data
## Forecast use of a city bikeshare system
We'll be working with a dataset from Capital Bikeshare that was used in a Kaggle competition ([data dictionary](https://www.kaggle.com/c/bike-sharing-demand/data)).
Get started on this competition through Kaggle Scripts
Bike sharing systems are a means of renting bicycles where the process of obtaining membership, rental, and bike return is automated via a network of kiosk locations throughout a city. Using these systems, people are able to rent a bike from one location and return it to a different place on an as-needed basis. Currently, there are over 500 bike-sharing programs around the world.
The data generated by these systems makes them attractive for researchers because the duration of travel, departure location, arrival location, and time elapsed is explicitly recorded. Bike sharing systems therefore function as a sensor network, which can be used for studying mobility in a city. In this competition, participants are asked to combine historical usage patterns with weather data in order to forecast bike rental demand in the Capital Bikeshare program in Washington, D.C.<jupyter_code>import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
# read the data and set the datetime as the index
url = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/bikeshare.csv'
bikes = pd.read_csv(url, index_col='datetime', parse_dates=True)
# "count" is a method, so it's best to name that column something else
bikes.rename(columns={'count':'total'}, inplace=True)
bikes.head()<jupyter_output><empty_output><jupyter_text>* datetime - hourly date + timestamp
* season -
* 1 = spring
* 2 = summer
* 3 = fall
* 4 = winter
* holiday - whether the day is considered a holiday
* workingday - whether the day is neither a weekend nor holiday
* weather -
* 1: Clear, Few clouds, Partly cloudy, Partly cloudy
* 2: Mist + Cloudy, Mist + Broken clouds, Mist + Few clouds, Mist
* 3: Light Snow, Light Rain + Thunderstorm + Scattered clouds, Light Rain + Scattered clouds
* 4: Heavy Rain + Ice Pallets + Thunderstorm + Mist, Snow + Fog
* temp - temperature in Celsius
* atemp - "feels like" temperature in Celsius
* humidity - relative humidity
* windspeed - wind speed
* casual - number of non-registered user rentals initiated
* registered - number of registered user rentals initiated
* total - number of total rentals<jupyter_code>bikes.shape<jupyter_output><empty_output><jupyter_text># Exercise 4.1
What is the relation between the temperature and total?
For a one percent increase in temperature, by how much do the bike shares increase?
Using sklearn, estimate a linear regression and predict the total bike shares when the temperature is 31 degrees <jupyter_code># Pandas scatter plot
bikes.plot(kind='scatter', x='temp', y='total', alpha=0.2)
from sklearn.linear_model import LinearRegression
linreg = LinearRegression()
feature_cols = ['temp']
X = bikes[feature_cols]
y = bikes.total
linreg.fit(X, y)
linreg.intercept_, linreg.coef_
linreg.intercept_ + linreg.coef_ * 31<jupyter_output><empty_output><jupyter_text>The relationship between temperature and the total number of rented bikes appears to be directly proportional: as the temperature rises, rentals increase, reaching their maximum at a moderate temperature. At the extremes, with very low or very high temperatures, the expected thing happens:
people decide not to use the service because the temperature does not allow it. For a one percent increase in temperature, the number of rented bikes increases by 9. When the temperature is 31 degrees, 290 bike rentals are forecast.# Exercise 04.2
Evaluate the model using the MSE<jupyter_code>from sklearn.linear_model import SGDRegressor
from sklearn.linear_model import LinearRegression
clf1 = LinearRegression()
clf1.fit(bikes[['temp']], bikes['total'])
x = np.array([9.84])
clf1.predict(x.reshape(1, -1))
y_pred = clf1.predict(bikes[['temp']])
from sklearn import metrics
import numpy as np
print('MSE:', metrics.mean_squared_error(bikes['total'], y_pred))<jupyter_output>MSE: 27705.2238053
<jupyter_text># Exercise 04.3
Does the scale of the features matter?
Let's say that temperature was measured in Fahrenheit, rather than Celsius. How would that affect the model?<jupyter_code>bikes['tempFahren'] = 0
bikes.sample(5)
bikes['tempFahren'] = (bikes.temp*1.8)+32
bikes.head()
from sklearn.linear_model import LinearRegression
linreg = LinearRegression()
feature_cols = ['tempFahren']
X = bikes[feature_cols]
y = bikes.total
linreg.fit(X, y)
linreg.intercept_, linreg.coef_
linreg.intercept_ + linreg.coef_ * 87.8
from sklearn.linear_model import SGDRegressor
from sklearn.linear_model import LinearRegression
clf1 = LinearRegression()
clf1.fit(bikes[['tempFahren']], bikes['total'])
x = np.array([49.712])
clf1.predict(x.reshape(1, -1))
y_pred = clf1.predict(bikes[['tempFahren']])
from sklearn import metrics
import numpy as np
print('MSE:', metrics.mean_squared_error(bikes['total'], y_pred))<jupyter_output>MSE: 27705.2238053
<jupyter_text>The scale in which the variables are measured has no influence in this exercise: when comparing temperature measured in degrees Celsius against temperature measured in degrees Fahrenheit, the prediction obtained from the linear model for the number of rented bikes is the same, and the Mean Squared Error is exactly equal for the two models.
# Exercise 04.4
Run a regression model using the temperature and temperature$^2$ as features, estimated with the OLS equations<jupyter_code>bikes['temp2'] = 0
bikes.head()
bikes['temp2'] = bikes.temp**2
bikes.head()
X= bikes[['temp', 'temp2']].values
y= bikes['total'].values
n_samples = X.shape[0]
X_ = np.c_[np.ones(n_samples), X]
beta = np.dot(np.linalg.inv(np.dot(X_.T, X_)),np.dot(X_.T, y))
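# Sanity check (a suggestion, not part of the original exercise): fitting
# sklearn's LinearRegression on bikes[['temp', 'temp2']] should give an
# intercept_ close to beta[0] and coef_ close to beta[1:].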
beta<jupyter_output><empty_output><jupyter_text># Exercise 04.5
Data visualization.
What behavior is unexpected?<jupyter_code># explore more features
feature_cols = ['temp', 'season', 'weather', 'humidity']
# multiple scatter plots in Pandas
fig, axs = plt.subplots(1, len(feature_cols), sharey=True)
for index, feature in enumerate(feature_cols):
bikes.plot(kind='scatter', x=feature, y='total', ax=axs[index], figsize=(16, 3))<jupyter_output><empty_output><jupyter_text>Are you seeing anything that you did not expect? seasons:
* 1 = spring
* 2 = summer
* 3 = fall
* 4 = winter <jupyter_code># pivot table of season and month
month = bikes.index.month
pd.pivot_table(bikes, index='season', columns=month, values='temp', aggfunc=np.count_nonzero).fillna(0)
# box plot of rentals, grouped by season
bikes.boxplot(column='total', by='season')
# line plot of rentals
bikes.total.plot()<jupyter_output><empty_output><jupyter_text>### Solution hereRegarding the plots, the unexpected behavior is in the plot of the seasons, since the number of bikes rented during the year does not seem to be strongly affected by the season, although rentals increase by a small percentage in fall and winter. This is curious, especially for winter, since it is a period with very low temperatures and we would expect people not to rent as many bikes.# Exercise 04.6
Estimate a regression using more features ['temp', 'season', 'weather', 'humidity'].
How is the performance compared to using only the temperature?<jupyter_code>from sklearn.linear_model import LinearRegression
linreg = LinearRegression()
feature_cols = ['temp','season','weather','humidity']
X = bikes[feature_cols]
y = bikes.total
linreg.fit(X, y)
linreg.intercept_, linreg.coef_
from sklearn.linear_model import SGDRegressor
from sklearn.linear_model import LinearRegression
clf1 = LinearRegression()
clf1.fit(bikes[['season','weather','temp', 'humidity']], bikes['total'])
x = np.array([1, 1, 9.84, 81])
clf1.predict(x.reshape(1, -1))
y_pred = clf1.predict(bikes[['season','weather','temp', 'humidity']])
from sklearn import metrics
import numpy as np
print('MSE:', metrics.mean_squared_error(bikes['total'], y_pred))<jupyter_output>MSE: 24335.4779775
<jupyter_text>When estimating the regression with more features, the model performs better than the regression that only takes temperature into account: comparing the MSE, the estimation error of the model with more features is lower, since these features may explain the model better. # Exercise 04.7 (3 points)
Randomly split the data into train and test sets
Which of the following models is the best in the testing set?
* ['temp', 'season', 'weather', 'humidity']
* ['temp', 'season', 'weather']
* ['temp', 'season', 'humidity']
Model:
* ['temp', 'season', 'weather', 'humidity']<jupyter_code>del bikes['holiday']
del bikes['workingday']
del bikes['atemp']
del bikes['windspeed']
del bikes['casual']
del bikes['registered']
bikes.head()
feature_cols = ['season', 'weather','temp','humidity']
X = bikes[feature_cols]
y = bikes.total
np.random.seed (1234)
random_sample = np.random.rand(y.shape[0])
X_train, X_test = X[random_sample<0.7], X[random_sample>=0.7]
Y_train, Y_test = y[random_sample<0.7], y[random_sample>=0.7]
print(Y_train.shape, Y_test.shape)
from sklearn.linear_model import LinearRegression
linreg = LinearRegression()
linreg.fit(X_train, Y_train)
y_pred = linreg.predict(X_test)
from sklearn import metrics
import numpy as np
print('MSE:', metrics.mean_squared_error(Y_test, y_pred))<jupyter_output>MSE: 24247.2210993
<jupyter_text>Model:
* ['temp', 'season', 'weather']<jupyter_code>del bikes['holiday']
del bikes['workingday']
del bikes['atemp']
del bikes['windspeed']
del bikes['casual']
del bikes['registered']
del bikes['humidity']
bikes.head()
feature_cols = ['season', 'weather','temp']
X = bikes[feature_cols]
y = bikes.total
np.random.seed (1234)
random_sample = np.random.rand(y.shape[0])
X_train, X_test = X[random_sample<0.7], X[random_sample>=0.7]
Y_train, Y_test = y[random_sample<0.7], y[random_sample>=0.7]
print(Y_train.shape, Y_test.shape)
from sklearn.linear_model import LinearRegression
linreg = LinearRegression()
linreg.fit(X_train, Y_train)
y_pred = linreg.predict(X_test)
from sklearn import metrics
import numpy as np
print('MSE:', metrics.mean_squared_error(Y_test, y_pred))<jupyter_output>MSE: 28312.882005
<jupyter_text>Model:
* ['temp', 'season', 'humidity']<jupyter_code>del bikes['holiday']
del bikes['workingday']
del bikes['atemp']
del bikes['windspeed']
del bikes['casual']
del bikes['registered']
del bikes['weather']
bikes.head()
feature_cols = ['season','temp','humidity']
X = bikes[feature_cols]
y = bikes.total
np.random.seed (1234)
random_sample = np.random.rand(y.shape[0])
X_train, X_test = X[random_sample<0.7], X[random_sample>=0.7]
Y_train, Y_test = y[random_sample<0.7], y[random_sample>=0.7]
print(Y_train.shape, Y_test.shape)
from sklearn.linear_model import LinearRegression
linreg = LinearRegression()
linreg.fit(X_train, Y_train)
y_pred = linreg.predict(X_test)
from sklearn import metrics
import numpy as np
print('MSE:', metrics.mean_squared_error(Y_test, y_pred))<jupyter_output>MSE: 25338.9453515
|
permissive
|
/exercises/04-BikesRent.ipynb
|
estefaniaperalta26-zz/PracticalMachineLearningClass
| 12 |
<jupyter_start><jupyter_text># Modeling and Simulation in Python
Chapter 13
Copyright 2017 Allen Downey
License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
<jupyter_code># Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *<jupyter_output><empty_output><jupyter_text>### Code from previous chapters`make_system`, `plot_results`, and `calc_total_infected` are unchanged.<jupyter_code>def make_system(beta, gamma):
"""Make a system object for the SIR model.
beta: contact rate in days
gamma: recovery rate in days
returns: System object
"""
init = State(S=89, I=1, R=0)
init /= np.sum(init)
t0 = 0
t_end = 7 * 14
return System(init=init, t0=t0, t_end=t_end,
beta=beta, gamma=gamma)
def plot_results(S, I, R):
"""Plot the results of a SIR model.
S: TimeSeries
I: TimeSeries
R: TimeSeries
"""
plot(S, '--', label='Susceptible')
plot(I, '-', label='Infected')
plot(R, ':', label='Recovered')
decorate(xlabel='Time (days)',
ylabel='Fraction of population')
def calc_total_infected(results):
"""Fraction of population infected during the simulation.
results: DataFrame with columns S, I, R
returns: fraction of population
"""
return get_first_value(results.S) - get_last_value(results.S)<jupyter_output><empty_output><jupyter_text>Here's an updated version of `run_simulation` that uses `unpack`.<jupyter_code>def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: TimeFrame
"""
unpack(system)
frame = TimeFrame(columns=init.index)
frame.row[t0] = init
for t in linrange(t0, t_end):
frame.row[t+1] = update_func(frame.row[t], t, system)
return frame<jupyter_output><empty_output><jupyter_text>**Exercise:** Write a version of `update_func` that uses `unpack`.<jupyter_code># Original
def update_func(state, t, system):
"""Update the SIR model.
state: State (s, i, r)
t: time
system: System object
returns: State (sir)
"""
s, i, r = state
infected = system.beta * i * s
recovered = system.gamma * i
s -= infected
i += infected - recovered
r += recovered
return State(S=s, I=i, R=r)
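# One possible sketch (not the book's official solution): as in run_simulation
# above, unpack(system) is assumed to make beta and gamma available directly.
def update_func_unpack(state, t, system):
    """Update the SIR model, reading beta and gamma via unpack."""
    unpack(system)
    s, i, r = state
    infected = beta * i * s
    recovered = gamma * i
    s -= infected
    i += infected - recovered
    r += recovered
    return State(S=s, I=i, R=r)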
# Solution goes here<jupyter_output><empty_output><jupyter_text>Test the updated code with this example.<jupyter_code>system = make_system(0.333, 0.25)
results = run_simulation(system, update_func)
results.head()
plot_results(results.S, results.I, results.R)<jupyter_output><empty_output><jupyter_text>### Sweeping betaMake a range of values for `beta`, with constant `gamma`.<jupyter_code>beta_array = linspace(0.1, 1.1, 11)
gamma = 0.25<jupyter_output><empty_output><jupyter_text>Run the simulation once for each value of `beta` and print total infections.<jupyter_code>for beta in beta_array:
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
print(system.beta, calc_total_infected(results))<jupyter_output>0.1 0.0072309016649785285
0.2 0.038410532615067994
0.30000000000000004 0.33703425948982
0.4 0.6502429153895082
0.5 0.8045061124629623
0.6 0.8862866308018508
0.7000000000000001 0.9316695082755875
0.8 0.9574278300784942
0.9 0.9720993156325133
1.0 0.9803437149675784
1.1 0.9848347293510136
<jupyter_text>Wrap that loop in a function and return a `SweepSeries` object.<jupyter_code>def sweep_beta(beta_array, gamma):
"""Sweep a range of values for beta.
beta_array: array of beta values
gamma: recovery rate
returns: SweepSeries that maps from beta to total infected
"""
sweep = SweepSeries()
for beta in beta_array:
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
sweep[system.beta] = calc_total_infected(results)
return sweep<jupyter_output><empty_output><jupyter_text>Sweep `beta` and plot the results.<jupyter_code>infected_sweep = sweep_beta(beta_array, gamma)
label = 'gamma = ' + str(gamma)
plot(infected_sweep, label=label)
decorate(xlabel='Contacts per day (beta)',
ylabel='Fraction infected')
savefig('figs/chap06-fig01.pdf')<jupyter_output>Saving figure to file figs/chap06-fig01.pdf
<jupyter_text>### Sweeping gammaUsing the same array of values for `beta`<jupyter_code>beta_array<jupyter_output><empty_output><jupyter_text>And now an array of values for `gamma`<jupyter_code>gamma_array = [0.2, 0.4, 0.6, 0.8]<jupyter_output><empty_output><jupyter_text>For each value of `gamma`, sweep `beta` and plot the results.<jupyter_code>for gamma in gamma_array:
infected_sweep = sweep_beta(beta_array, gamma)
label = 'γ = ' + str(gamma)
plot(infected_sweep, label=label)
decorate(xlabel='Contacts per day (beta)',
ylabel='Fraction infected',
loc='upper left')
savefig('figs/chap06-fig02.pdf')<jupyter_output>Saving figure to file figs/chap06-fig02.pdf
<jupyter_text>** Exercise:** Suppose the infectious period for the Freshman Plague is known to be 2 days on average, and suppose during one particularly bad year, 40% of the class is infected at some point. Estimate the time between contacts.<jupyter_code># Solution goes here
# Solution goes here
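# A possible sketch (added, not the book's official answer):
# an infectious period of 2 days on average means gamma = 1/2 = 0.5.
# Sweep beta, find the value whose total infected fraction is closest to 40%,
# and take 1/beta as the estimated time between contacts.
gamma_fp = 1 / 2
sweep_fp = sweep_beta(linspace(0.1, 1.1, 101), gamma_fp)
beta_est = abs(sweep_fp - 0.4).idxmin()
print('beta ~', beta_est, 'so time between contacts ~', 1 / beta_est, 'days')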
# Solution goes here<jupyter_output><empty_output>
|
permissive
|
/code/chap13.ipynb
|
dreamofjade/ModSimPy
| 13 |
<jupyter_start><jupyter_text>
Regression Models with Keras
## Introduction
As we discussed in the videos, despite the popularity of more powerful libraries such as PyTorch and TensorFlow, they are not easy to use and have a steep learning curve. So, for people who are just starting to learn deep learning, there is no better library to use than Keras.
Keras is a high-level API for building deep learning models. It has gained favor for its ease of use and syntactic simplicity, which facilitate fast development. As you will see in this lab and the other labs in this course, building a very complex deep learning network can be achieved with Keras with only a few lines of code. You will appreciate Keras even more once you learn how to build deep models using PyTorch and TensorFlow in the other courses.
So, in this lab, you will learn how to use the Keras library to build a regression model.
Regression Models with Keras
Objective for this Notebook
1. How to use the Keras library to build a regression model.
2. Download and Clean dataset
3. Build a Neural Network
4. Train and Test the Network.
## Table of Contents
1. Download and Clean Dataset
2. Import Keras
3. Build a Neural Network
4. Train and Test the Network
## Download and Clean Dataset
Let's start by importing the pandas and the Numpy libraries.
<jupyter_code>import pandas as pd
import numpy as np<jupyter_output><empty_output><jupyter_text>We will be playing around with the same dataset that we used in the videos.
The dataset is about the compressive strength of different samples of concrete based on the volumes of the different ingredients that were used to make them. Ingredients include:
1. Cement
2. Blast Furnace Slag
3. Fly Ash
4. Water
5. Superplasticizer
6. Coarse Aggregate
7. Fine Aggregate
Let's download the data and read it into a pandas dataframe.
<jupyter_code>concrete_data = pd.read_csv('https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0101EN/labs/data/concrete_data.csv')
concrete_data.head()<jupyter_output><empty_output><jupyter_text>So the first concrete sample has 540 kg of cement, 0 kg of blast furnace slag, 0 kg of fly ash, 162 kg of water, 2.5 kg of superplasticizer, 1040 kg of coarse aggregate, and 676 kg of fine aggregate per cubic meter of mix. Such a concrete mix, at an age of 28 days, has a compressive strength of 79.99 MPa.
#### Let's check how many data points we have.
<jupyter_code>concrete_data.shape<jupyter_output><empty_output><jupyter_text>So, there are approximately 1000 samples to train our model on. Because of the few samples, we have to be careful not to overfit the training data.
Let's check the dataset for any missing values.
<jupyter_code>concrete_data.describe()
concrete_data.isnull().sum()<jupyter_output><empty_output><jupyter_text>The data looks very clean and is ready to be used to build our model.
#### Split data into predictors and target
The target variable in this problem is the concrete sample strength. Therefore, our predictors will be all the other columns.
<jupyter_code>concrete_data_columns = concrete_data.columns
predictors = concrete_data[concrete_data_columns[concrete_data_columns != 'Strength']] # all columns except Strength
target = concrete_data['Strength'] # Strength column<jupyter_output><empty_output><jupyter_text>
Let's do a quick sanity check of the predictors and the target dataframes.
<jupyter_code>predictors.head()
target.head()<jupyter_output><empty_output><jupyter_text>Finally, the last step is to normalize the data by subtracting the mean and dividing by the standard deviation.
<jupyter_code>predictors_norm = (predictors - predictors.mean()) / predictors.std()
predictors_norm.head()<jupyter_output><empty_output><jupyter_text>Let's save the number of predictors to _n_cols_ since we will need this number when building our network.
<jupyter_code>n_cols = predictors_norm.shape[1] # number of predictors<jupyter_output><empty_output><jupyter_text>
## Import Keras
Recall from the videos that Keras normally runs on top of a low-level library such as TensorFlow. This means that to be able to use Keras, you have to install a backend such as TensorFlow first, and Keras then runs on top of it. In CC Labs, TensorFlow is installed as the backend for Keras, so that is the backend we will be using here.
#### Let's go ahead and import the Keras library
<jupyter_code>from tensorflow import keras<jupyter_output><empty_output><jupyter_text>Since we import Keras from the tensorflow package, it runs on the TensorFlow backend.
Let's import the rest of the packages from the Keras library that we will need to build our regression model.
<jupyter_code>from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense<jupyter_output><empty_output><jupyter_text>
## Build a Neural Network
Let's define a function that defines our regression model for us so that we can conveniently call it to create our model.
<jupyter_code># define regression model
def regression_model():
# create model
model = Sequential()
model.add(Dense(50, activation='relu', input_shape=(n_cols,)))
model.add(Dense(50, activation='relu'))
model.add(Dense(1))
# compile model
model.compile(optimizer='adam', loss='mean_squared_error')
    return model<jupyter_output><empty_output><jupyter_text>The above function creates a model that has two hidden layers, each with 50 hidden units.
## Train and Test the Network
Let's call the function now to create our model.
<jupyter_code># build the model
model = regression_model()<jupyter_output><empty_output><jupyter_text>Next, we will train and test the model at the same time using the _fit_ method. We will leave out 30% of the data for validation and we will train the model for 100 epochs.
<jupyter_code># fit the model
model.fit(predictors_norm, target, validation_split=0.3, epochs=100, verbose=2)<jupyter_output>Epoch 1/100
23/23 - 1s - loss: 1615.8882 - val_loss: 1080.0413
Epoch 2/100
23/23 - 0s - loss: 1362.0828 - val_loss: 770.4076
Epoch 3/100
23/23 - 0s - loss: 838.2135 - val_loss: 318.1047
Epoch 4/100
23/23 - 0s - loss: 317.2419 - val_loss: 212.5244
Epoch 5/100
23/23 - 0s - loss: 236.5028 - val_loss: 200.2343
Epoch 6/100
23/23 - 0s - loss: 210.4863 - val_loss: 193.3105
Epoch 7/100
23/23 - 0s - loss: 197.6487 - val_loss: 190.0394
Epoch 8/100
23/23 - 0s - loss: 188.6732 - val_loss: 190.5798
Epoch 9/100
23/23 - 0s - loss: 181.5952 - val_loss: 186.0595
Epoch 10/100
23/23 - 0s - loss: 175.2013 - val_loss: 187.3515
Epoch 11/100
23/23 - 0s - loss: 168.4743 - val_loss: 183.8541
Epoch 12/100
23/23 - 0s - loss: 164.1861 - val_loss: 180.4595
Epoch 13/100
23/23 - 0s - loss: 160.0282 - val_loss: 183.0261
Epoch 14/100
23/23 - 0s - loss: 156.0972 - val_loss: 184.2769
Epoch 15/100
23/23 - 0s - loss: 152.2395 - val_loss: 181.5956
Epoch 16/100
23/23 - 0s - loss: 149.7111 - val_loss: 182.1202
Epoch 17/100
2[...]
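# Added sketch (not part of the original lab): after training, you could also hold out
# an explicit test set and score the network on it, e.g. with scikit-learn helpers.
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
X_tr, X_te, y_tr, y_te = train_test_split(predictors_norm, target, test_size=0.3)
eval_model = regression_model()           # same architecture as above
eval_model.fit(X_tr, y_tr, epochs=50, verbose=0)
print('held-out MSE:', mean_squared_error(y_te, eval_model.predict(X_te)))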
|
no_license
|
/DL0101EN-3-1-Regression-with-Keras-py-v1.0__2_.ipynb
|
charii92/dl-nn-keras-ibm
| 13 |
<jupyter_start><jupyter_text>### Functional Calculator (but not quite)
Problems:
- No input validation
- No loop<jupyter_code>OPERATORS = "+", "-", "/", "*"
def f_get_number():
return int (input("Digite um inteiro: "))
def f_get_operator():
return input("Digite um operador(+,-,*,/): ")
def f_calculate(number1, operator, number2):
return number1+number2 if operator=="+" \
else number1-number2 if operator=="-" \
else number1*number2 if operator=="*" \
else number1/number2 if operator=="/" \
else None
def f_main():
return f_calculate(f_get_number(), f_get_operator(), f_get_number())
print("O resultado é: %d" %f_main())<jupyter_output><empty_output><jupyter_text>### Calculadora Funciona (agora sim)
O que vamos usar:
- Expressões Lambda
- Decorators
- Higher-order functions<jupyter_code>OPERATORS = "+", "-", "/", "*"
def maybe(fnc):
    ## turns exceptions into return values instead of raising them
def inner(*args):
for a in args:
if isinstance(a, Exception):
return a
try:
return fnc(*args)
except Exception as e:
return e
return inner
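# Quick added illustration of maybe(): it wraps a function so that exceptions become
# return values instead of being raised.
safe_int = maybe(int)
print(safe_int("3"))      # 3
print(safe_int("oops"))   # prints the ValueError instance instead of raising it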
def repeat(fnc, until):
    ## until receives a function.
    ## repeats a function until the value it returns satisfies the stopping criterion given by until
def inner(*args):
while True:
result = fnc(*args)
if until(result):
return result
return inner
is_int = lambda i: isinstance(i, int)
get_number = lambda: int (input("Digite um número inteiro: "))
safe_get_number = repeat(maybe(get_number), until = is_int)
is_operator = lambda o: o in OPERATORS
get_operator = lambda: input("Digite um operador(+,-,*,/): ")
safe_get_operator = repeat(get_operator, until = is_operator)
calculate = lambda number1, operator, number2: number1+number2 if operator=="+" \
else number1-number2 if operator=="-" \
else number1*number2 if operator=="*" \
else number1/number2 if operator=="/" \
else None
main = lambda: calculate(safe_get_number(), safe_get_operator(), safe_get_number())  # use the validated getters so bad input no longer crashes the loop
forever = lambda retval: False
main_loop = repeat(lambda: print(main()), until = forever)
main_loop()<jupyter_output>Digite um número inteiro: 1
Digite um operador(+,-,*,/): +
Digite um número inteiro: 2
3
Digite um número inteiro: 1
Digite um operador(+,-,*,/): ++
Digite um número inteiro: 2
None
Digite um número inteiro: e
|
no_license
|
/Aulas/32_Calculadora Lambda.ipynb
|
pedrogallon/USJT5_LingProg
| 2 |
<jupyter_start><jupyter_text># K-fold Val<jupyter_code>accuracy_scores = []
prec_scores = []
rec_scores = []
f1_scores = []
import keras.backend.tensorflow_backend as Keras_GPU
from keras import backend as K  # needed for K.clear_session() inside the loop below
from sklearn.model_selection import KFold
from keras.optimizers import Adam
opt = Adam(lr=0.00001)  # note: not passed to model.compile below, which uses the default 'Adam' optimizer
for i in range(1):
kfold = KFold(n_splits=10, shuffle=True, random_state=i)
for train, test in kfold.split(all_dataframe, y):
K.clear_session()
X_train = index_to_list(all_dataframe, train)
y_train = y[train]
X_test = index_to_list(all_dataframe, test)
y_test = y[test]
X_train = eeg_band_power_seperate(X_train, 1)
X_test = eeg_band_power_seperate(X_test, 0)
X_train = X_train[1:]
X_test = X_test[1:]
model = conv2d_model()
model.compile(optimizer='Adam',
loss='binary_crossentropy',
metrics=['accuracy', recall_m, precision_m, f1_m])
model.fit(X_train, y_train, epochs=20,batch_size=4, verbose=0)
scores = model.evaluate(X_test, y_test, verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
print("%s: %.2f%%" % (model.metrics_names[2], scores[2]*100))
print("%s: %.2f%%" % (model.metrics_names[3], scores[3]*100))
print("%s: %.2f%%" % (model.metrics_names[4], scores[4]*100))
print("\n")
accuracy_scores.append(scores[1])
prec_scores.append(scores[2])
rec_scores.append(scores[3])
f1_scores.append(scores[4])
print("%.2f (+/- %.2f%%)" % (np.mean(accuracy_scores), np.std(accuracy_scores)))
print("%.2f (+/- %.2f%%)" % (np.mean(prec_scores), np.std(prec_scores)))
print("%.2f (+/- %.2f%%)" % (np.mean(rec_scores), np.std(rec_scores)))
print("%.2f (+/- %.2f%%)" % (np.mean(f1_scores), np.std(f1_scores)))
model.summary()<jupyter_output>Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 19, 4, 64) 31457344
_________________________________________________________________
dropout_1 (Dropout) (None, 19, 4, 64) 0
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 10, 2, 64) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 1280) 0
_________________________________________________________________
dense_1 (Dense) (None, 64) 81984
_________________________________________________________________
dropout_2 (Dropout) (None, 64) 0
______________________________________________________[...]
|
no_license
|
/.ipynb_checkpoints/MDD_DL-band_model-Copy1-checkpoint.ipynb
|
ark1st/MDD
| 1 |
<jupyter_start><jupyter_text>This program is for the additional PyRamen Challenge.
@author Kaleb Nunn<jupyter_code>import csv
import numpy as np
import decimal
def upload_data(path):
data = []
with open(path) as f:
reader = csv.reader(f)
next(reader, None)
for row in reader:
data.append(row)
return data
# item, category, description, price, cost
menu = upload_data('menu_data.csv')
# Line_Item_ID, Date, Credit_Card_Number, Quantity, Menu_Item
sales = upload_data('sales_data.csv')
report = {}
template = {
"01-count": 0,
"02-revenue": 0,
"03-cogs": 0,
"04-profit": 0,
}
for item in sales:
sales_item = item[-1]
quantity = float(item[-2])
add = True
for x in report:
if sales_item == x:
add = False
break
if add:
        report[sales_item] = dict(template)  # copy the template so each menu item gets its own counters
found = False
for record in menu:
name = record[0]
price = float(record[-2])
cost = float(record[-1])
profit = price - cost
if sales_item == name:
report[sales_item]["01-count"] += quantity
report[sales_item]["02-revenue"] += price * quantity
report[sales_item]["03-cogs"] += cost * quantity
report[sales_item]["04-profit"] += profit * quantity
found = True
if not found:
print(f"{sales_item} does not equal anything on the menu! NO MATCH!")
report<jupyter_output><empty_output>
|
no_license
|
/PyRamen/.ipynb_checkpoints/main-checkpoint.ipynb
|
kalnun/python-homework
| 1 |
<jupyter_start><jupyter_text>Set up the environment for Google Chrome on Colab. [Download options reference](https://google-images-download.readthedocs.io/en/latest/arguments.html)<jupyter_code>!pip install selenium
!apt-get update # to update ubuntu to correctly run apt install
!apt install chromium-chromedriver
!cp /usr/lib/chromium-browser/chromedriver /usr/bin
import sys
sys.path.insert(0,'/usr/lib/chromium-browser/chromedriver')
from selenium import webdriver
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--disable-dev-shm-usage')
wd = webdriver.Chrome('chromedriver',chrome_options=chrome_options)
wd.get("https://www.webite-url.com")
from google.colab import drive
drive.mount('/content/drive')
!pip install google_images_download
from google_images_download import google_images_download
DRIVER_PATH = '/usr/lib/chromium-browser/chromedriver'
args = {'keywords': 'sad human face', # the query to search in Google Images
        'output_directory': '/content/drive/My Drive/AML', # destination folder for the downloaded images
        'image_directory': 'Sad', # name of the destination sub-folder
'color_type': 'full-color',
'silent_mode': True, # limits the number of lines printed to stdout
'limit': 10, # limit for the number of images returned
'chromedriver': DRIVER_PATH}
response = google_images_download.googleimagesdownload()
response.download(args)<jupyter_output>Requirement already satisfied: google_images_download in /usr/local/lib/python3.6/dist-packages (2.8.0)
Requirement already satisfied: selenium in /usr/local/lib/python3.6/dist-packages (from google_images_download) (3.141.0)
Requirement already satisfied: urllib3 in /usr/local/lib/python3.6/dist-packages (from selenium->google_images_download) (1.24.3)
Downloading images for: sad human face ...
|
no_license
|
/Codici Consegna/Scraping.ipynb
|
malborroni/RECMojion
| 1 |
<jupyter_start><jupyter_text># 1. Data processing
## (a). Bubble Tea Location<jupyter_code>bubble_tea = pd.read_csv('dataset/bubble_location_data.csv')
bubble_tea.head()<jupyter_output><empty_output><jupyter_text>### Count the number of bubble tea shops for each state<jupyter_code>bubble_tea_count = pd.DataFrame(bubble_tea.groupby('state').count().name)
bubble_tea_count.index.name = 'STNAME'
bubble_tea_count.columns = ['COUNT']
bubble_tea_count.head()<jupyter_output><empty_output><jupyter_text># 1.Data processing
## (b). Census Data : Age group for each county<jupyter_code>agesex = pd.read_csv('dataset/cc-est2019-agesex.csv')
agesex.head()
# create age group 14-24
agesex['AGE1424_TOT'] = agesex.AGE1417_TOT + agesex.AGE1824_TOT
# only keep 2019 data
agesex = agesex[agesex.YEAR==12]
agesex.reset_index(drop=True, inplace=True)
# select relevent features
agesex_colname = ["STNAME", "POPESTIMATE",
"AGE1424_TOT", "AGE2544_TOT",
"AGE4564_TOT", "AGE65PLUS_TOT"]
agesex = agesex[agesex_colname]
agesex = agesex.groupby('STNAME').sum().reset_index()
agesex.STNAME = agesex.STNAME.map(lambda x : us_state_abbrev[x])
agesex.set_index('STNAME', inplace=True)
# convert the population to ratio
for name in agesex.columns:
if agesex[name].dtype == int and name != 'POPESTIMATE':
agesex[name] = agesex[name]/agesex["POPESTIMATE"]
agesex.drop(columns='POPESTIMATE', inplace=True)
agesex.head()<jupyter_output><empty_output><jupyter_text># 1.Data processing
## (c). Census Data : Total Population and East Asian population<jupyter_code>east_asian = pd.read_csv('dataset/EAST_ASIAN_POP.csv')
east_asian = east_asian.groupby('STNAME').sum().reset_index()
east_asian.STNAME = east_asian.STNAME.map(lambda x : us_state_abbrev[x])
east_asian.set_index('STNAME', inplace=True)
east_asian.head()<jupyter_output><empty_output><jupyter_text># 1.Data processing
## (d). Merge the three datasets<jupyter_code>df = bubble_tea_count.copy()
df[agesex.columns] = agesex[agesex.columns]
df[east_asian.columns] = east_asian[east_asian.columns]
df.reset_index(inplace=True)<jupyter_output><empty_output><jupyter_text>The population distribution is right-skewed: most states have small East Asian populations and a few have very large ones. Let's apply a log transform to bring it closer to a normal distribution<jupyter_code>logdf = df.copy()
for name in logdf.columns:
if logdf[name].dtype==int or logdf[name].dtype==float:
logdf[name] = np.log10(logdf[name])
plt.figure(figsize=(4,6));
ax1 = sn.displot(data=df, x='EAST_ASIAN_POP');
ax1.ax.set_xticks([i*10**5 for i in range(7)]);
ax1.ax.set_xticklabels([r'$%d$'%i for i in range(7)]);
ax1.ax.set_xlabel(r'East Asian Population $ \times 10^5$');
plt.figure(figsize=(4,6));
ax2 = sn.displot(data=logdf, x='EAST_ASIAN_POP');
ax2.ax.set_xticks([i for i in np.linspace(3, 6, 7)]);
ax2.ax.set_xticklabels([r'$10^{%d}$'%i if i.is_integer() else ' ' for i in np.linspace(3, 6, 7)]);
ax2.ax.set_xlabel('East Asian Population');<jupyter_output><empty_output><jupyter_text># 2. Exploratory Data Analysis
## (a). Visualize on map<jupyter_code>usa_map = folium.Map([39.358, -98.118], zoom_start=4, tiles="Stamen toner")
for lat, lon, cat in zip(bubble_tea.latitude, bubble_tea.longitude, bubble_tea.name):
folium.CircleMarker([lat, lon],radius=3, color=None,
fill_color='red',fill_opacity=0.3,
tooltip=cat).add_to(usa_map)
usa_map<jupyter_output><empty_output><jupyter_text># 2. Exploratory Data Analysis
## (b). Number of bubble tea shop & Total population<jupyter_code>plt.figure(figsize=(10,8))
ax1= plt.subplot(121)
ax2= plt.subplot(122)
order = df.sort_values('COUNT',ascending=False)['STNAME']
sn.barplot(data=df, x='COUNT', y='STNAME', order=order[0:20], ax=ax1)
ax1.set_xlabel('Num. Bubble Tea Shop')
ax1.set_ylabel('State')
sn.barplot(data=df, x='TOT_POP', y='STNAME', order=order[0:20], ax=ax2)
ax2.set_xlabel('Population')
ax2.set_ylabel('State')
<jupyter_output><empty_output><jupyter_text># 2. Exploratory Data Analysis
## (c). Number of bubble tea shop & East-Asian population<jupyter_code>ax=sn.regplot(data=logdf, x='EAST_ASIAN_POP', y='COUNT')
ax.set_xlabel('East Asian Population')
ax.set_ylabel('Num. Bubble Tea Shop')
ax.set_xticks([i for i in np.linspace(3, 6, 7)]);
ax.set_yticks([i for i in np.linspace(0, 3, 7)]);
ax.set_xticklabels([r'$10^{%d}$'%i if i.is_integer() else ' ' for i in np.linspace(3, 6, 7)]);
ax.set_yticklabels([r'$10^{%d}$'%i if i.is_integer() else ' ' for i in np.linspace(0, 3, 7)]);<jupyter_output><empty_output><jupyter_text># 2. Exploratory Data Analysis
## (d). Age Group<jupyter_code>df['logcount'] = df.COUNT.apply(np.log10)
ax = sn.scatterplot(data=df, x='AGE2544_TOT', y='logcount')
ax.set_xlabel('Percentage of Aged 25-44')
ax.set_ylabel('Num. Bubble Tea Shop')
ax.set_yticks([i for i in np.linspace(0, 3, 7)]);
ax.set_yticklabels([r'$10^{%d}$'%i if i.is_integer() else ' ' for i in np.linspace(0, 3, 7)]);<jupyter_output><empty_output><jupyter_text>Washington DC has anomalously many people in Age group 25-44.<jupyter_code>plt.figure(figsize=(20,6))
ax1= plt.subplot(121)
ax2= plt.subplot(122)
sn.regplot(data=df, x='AGE1424_TOT', y='logcount', ax=ax1)
ax1.set_xlabel('Percentage of Aged 14-24')
ax1.set_ylabel('Num. Bubble Tea Shop')
ax1.set_yticks([i for i in np.linspace(0, 3, 7)]);
ax1.set_yticklabels([r'$10^{%d}$'%i if i.is_integer() else ' ' for i in np.linspace(0, 3, 7)]);
sn.regplot(data=df, x='AGE2544_TOT', y='logcount', ax=ax2, robust=True)
ax2.set_xlabel('Percentage of Aged 25-44')
ax2.set_ylabel(' ')
ax2.set_yticks([i for i in np.linspace(0, 3, 7)]);
ax2.set_yticklabels([r'$10^{%d}$'%i if i.is_integer() else ' ' for i in np.linspace(0, 3, 7)]);
plt.xlim([0.22, 0.32])
plt.ylim([0.5, 3.5])
df.drop(columns=['logcount'], inplace=True)
df_plot_corr = df.corr()
df_plot_corr.columns = ['Num. Bubble Tea Shop', 'Age 14-24', 'Age 25-44',
'Age 45-64', 'Age 65+', 'Population', 'East-Asian population']
plt.figure(figsize=(10,10))
sn.heatmap(data = df_plot_corr, annot=True, fmt='.2f')
<jupyter_output><empty_output><jupyter_text># 3. Modeling
# (a). Split training and test dataset<jupyter_code>from sklearn.model_selection import train_test_split
X = logdf.copy()
y = X.pop('COUNT')
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
random_state=1)
X_train = X_train.drop(columns=['STNAME'])
test_label = X_test.pop('STNAME').values
y_test = y_test.values<jupyter_output><empty_output><jupyter_text># 3. Modeling
# (b). Grid Search Hyperparameter Tuning<jupyter_code>#################################################
## Grid Search Hyperparameter Tuning
## I try two different regression models : svr and randomforest
## It turns out that the SVR performs better
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV
param_grid = [
{'C': [1, 10, 100, 1000], 'kernel': ['linear']},
{'C': [1, 10, 100, 1000], 'gamma': [0.001, 0.0001], 'kernel': ['rbf']},
]
GS_svr = GridSearchCV(SVR(), param_grid=param_grid, scoring='r2')
GS_svr = GS_svr.fit(X_train, y_train)
print("best svr parameters : ", GS_svr.best_params_)
print("best svr score : ", GS_svr.best_score_)
param_grid = [{'n_estimators':[i for i in range(50, 450, 50)]}]
GS_rf = GridSearchCV(RandomForestRegressor(), param_grid=param_grid, scoring='r2')
GS_rf = GS_rf.fit(X_train, y_train)
print("best Random Forest parameters : ", GS_rf.best_params_)
print("best rf score : ", GS_rf.best_score_)
# best model
model = GS_svr.best_estimator_
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
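# Added sketch: also report the held-out R^2 of the tuned SVR alongside the plot below.
from sklearn.metrics import r2_score
print('test R^2:', r2_score(y_test, y_pred))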
result = pd.DataFrame(np.array([y_pred, y_test]).T)
result.columns = ['y_predict', 'y_test']
plt.figure(figsize=(10, 10))
plt.plot(np.linspace(0,4,101),np.linspace(0,4,101), 'k--', alpha=0.5)
ax = sn.scatterplot(data=result, x='y_test', y='y_predict',s=40 )
for x,y,lab in zip(result.y_test, result.y_predict, test_label):
ax.text(x+0.05, y-0.1, lab, horizontalalignment='left', size=18, color='black')
labels = [r'$10^{%d}$'%e for i, e in enumerate(np.linspace(0, 4, 9))]
for i in range(len(labels)):
if i%2==1:
labels[i] = ''
ax.set_xlabel('True Num. Bubble Tea Shop')
ax.set_xticks([i for i in np.linspace(0, 4, 9)]);
ax.set_xticklabels(labels);
ax.set_ylabel('Predicted Num. Bubble Tea Shop')
ax.set_yticks([i for i in np.linspace(0, 4, 9)]);
ax.set_yticklabels(labels);<jupyter_output><empty_output><jupyter_text># 3. Modeliing
# (c) Permutation importance<jupyter_code>import eli5
from eli5.sklearn import PermutationImportance
perm = PermutationImportance(model).fit(X_test, y_test)
eli5.show_weights(perm,feature_names=list(X_test.columns))
<jupyter_output>/Users/zpcian/Library/Python/3.7/lib/python/site-packages/sklearn/utils/deprecation.py:143: FutureWarning: The sklearn.metrics.scorer module is deprecated in version 0.22 and will be removed in version 0.24. The corresponding classes / functions should instead be imported from sklearn.metrics. Anything that cannot be imported from sklearn.metrics is now part of the private API.
warnings.warn(message, FutureWarning)
/Users/zpcian/Library/Python/3.7/lib/python/site-packages/sklearn/utils/deprecation.py:143: FutureWarning: The sklearn.feature_selection.base module is deprecated in version 0.22 and will be removed in version 0.24. The corresponding classes / functions should instead be imported from sklearn.feature_selection. Anything that cannot be imported from sklearn.feature_selection is now part of the private API.
warnings.warn(message, FutureWarning)
|
no_license
|
/yelp_location_demography.ipynb
|
dretoabasi/Yelp-Reviews-Analysis-for-Bubble-Tea-Shops
| 14 |
<jupyter_start><jupyter_text>### Connect Google Drive<jupyter_code># upload an image
from google.colab import drive
drive.mount('/content/gdrive')<jupyter_output>Mounted at /content/gdrive
<jupyter_text>### Access the shared folder (mecathon folder)<jupyter_code>%cd /content/gdrive/Shareddrives/메카톤2021여름
!mkdir mecathon
%cd mecathon<jupyter_output>/content/gdrive/Shareddrives/메카톤2021여름
mkdir: cannot create directory ‘mecathon’: File exists
/content/gdrive/Shareddrives/메카톤2021여름/mecathon
<jupyter_text>### Get darknet from git<jupyter_code># clone darknet repo
# !git clone https://github.com/AlexeyAB/darknet<jupyter_output>Cloning into 'darknet'...
remote: Enumerating objects: 15298, done.
remote: Counting objects: 100% (9/9), done.
remote: Compressing objects: 100% (8/8), done.
remote: Total 15298 (delta 1), reused 7 (delta 1), pack-reused 15289
Receiving objects: 100% (15298/15298), 13.69 MiB | 5.21 MiB/s, done.
Resolving deltas: 100% (10383/10383), done.
Checking out files: 100% (2044/2044), done.
<jupyter_text>### Configure and build darknet (OpenCV, GPU settings)<jupyter_code># change makefile to have GPU and OPENCV enabled
%cd darknet
!sed -i 's/OPENCV=0/OPENCV=1/' Makefile
!sed -i 's/GPU=0/GPU=1/' Makefile
!sed -i 's/CUDNN=0/CUDNN=1/' Makefile
# verify CUDA
!/usr/local/cuda/bin/nvcc --version
# make darknet (build)
!make
# upload pretrained convolutional layer weights
!wget https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v4_pre/yolov4-csp.conv.142
!./darknet detector train custom/obj.data custom/yolov4-csp-swish.cfg yolov4-csp-swish.conv.164 -dont_show -map
<jupyter_output><empty_output>
|
no_license
|
/train/yolov4_csp_swish/mecathon_yolov4_csp_swish.ipynb
|
IHAGI-c/mecathon_scooter_1
| 4 |
<jupyter_start><jupyter_text># **string
1)string
2)indexing in string
3) string methods/functions
4) string formatting<jupyter_code>"""single quotes use inside double quotes"""
a = "welcome to 'all of you'"
print(a)
"""double quotes use inside single quotes"""
b='my hobby "is listening music"'
print(b)
a = "this 'is like' drama"
print(a)
b="i ike tiktok"
print(b)
print(type(b))
a= """jkdjkjkdvkjkf
kjfkjksjfkj"""
print(a)<jupyter_output>jkdjkjkdvkjkf
kjfkjksjfkj
<jupyter_text># \n is used for a new line
<jupyter_code>a="hello world \n sapna"
print(a)
b="my princess name \n saanvi"
print(b)
c=r"my cutie \n shanu"
print(c)
a="what are u doing now \n i am sleeping"
print(a)
b=R"i am soo excited \n do u know"
print(b)<jupyter_output>i am soo excited \n do u know
<jupyter_text># **indexing in string<jupyter_code>a="how are you payal"
print(len(a))
print(a[10])
print(a[-2])
print(a[8])
print(a[7])
print(a[-5])
a="radha doing some work"
print(a)
print(len(a))
print(a[-6])
print(a[7])
b="i am doing some work can u help me"
print(len(b))
print(b[-15])
print(b[17])<jupyter_output>34
k
o
<jupyter_text># **string methods/functions<jupyter_code>a="hi hello"
print(len(a))
a="hi manvi how are you"
print(len(a))
b='whta is the name of this type animals'
print(len(b))
print(b[25])
print(b[-34])
# the lower()mathod returnes the string in lower case.
b="HELLO WORLD"
print(b.lower())
a="WHTSAPP GUYS HOW ARE YOU"
print(type(a))
print(a.lower())
a="I AM A PROGRAMMER"
print(a.lower())
#the upper mathod()returens the string in upper case.
a="i want to be a software doveloper"
print(a.upper())
x="hello frnds what are u doing"
print(x.upper())
# the strip mathod()remove the any whitespace from the begining or ending the string
a= "hii, hello "
print(a.strip())
z=" hello world wide web "
print(z.strip())
a=" hello what happend "
print(a.strip())
y='mr and mrs how r you'
z='i am fine'
print(y.lstrip(),z)
#lstrip
a=" where are u from dear ? "
b= "i am form india"
print(a.lstrip(),b)
#the rplace() mathod replace a string with another string
a="hello,world ,world,world"
print(a.replace("world","city"))
b="math based on the latest syllabus"
print(b.replace("math","chemistry"))
s="sapna frnds name abhi mithu poonm"
print(s.replace("poonm","abhi"))
b="my name is smayra, smyra, sappu ,sappu"
print(b.replace("smayra","sapana"))
#split =the mathod split is the string into substring if it finds
a="hello ,world "
b=a.split("l")
print(b)
x="abhi and anita"
y=x.split('a')
print(y)
a="mister india"
print(a.capitalize())
name="my name is sapna"
age=name.split("a")
print(age)
#capitalize
a = 'hello gyus'
print(a.capitalize())
a="""my name is"sapna and what"is your name"""
print(a)
a="hello world \n sapna"
print(a)
b=r"what are you doing\n dancing"
print(b)
a="hello sapna what is your name"
print(len(a))
print(a[-8])
a="what is this"
print(a.upper())
a="HELLO SAPPU "
print(a.lower())
a=" what are you saying "
print(a.strip())
a="what is name"
b="my name is pappu"
print(a.lstrip(),b)
a="hello,whole,well,well,whole"
print(a.replace("well","sappu"))
a="hii good"
b=a.split("o")
print(b)
a="here come"
print(a.capitalize())
a="what,which"
print(a.strip())<jupyter_output>what,which
<jupyter_text># **string formatting<jupyter_code>a=int(input(''))
b=int(input(''))
print('my age is',a-b )
print(f'my age is ,{a-b} ,{a}')
x="this article {1} is written in {0} {2} "
print(x.format("hi","a",b))
a=13-9j
print('i am %f from codesession academy' % 34.3,12)
print('hi',a,'how r you')
print("i am %s from %s %.4f" %("sapna","alwar",12))
a=input("enter your name")
print("my name is",a)
print(f"my name is,{a}")
a="sunita write {} word in {} diary with {}"
print(a.format("some","own","pen"))
a= input("somthing is going wrong")
print("it is no possible",a)
a=int(input(" "))
b=int(input(" "))
print('my number is ', a-b )
print(f'my number is', {b} )
x="accourding to {1} my way that {2} is not good {0}"
print(x.format("sapna","hii","hello"))
print('i am %s 2.%f from codesession %s in'%(4343+9999j,10.55,"acadmay"))
a=879498
print(type(a))
print("i mean 3,%f to like a %s for my choise" %(7636764398,"puzzle"))
print("this", a, "like a numbe")
x,y,z,w = "sapna" ,"codesession" ,"344","34"
print('i am %s from %s and start %s just %s'%(x,y,z,w))
a="welcome to my hotel"
b=545-4343j
print("hi",a,b,"thnks")
print('my work is %s' %'codewrite')
print('my name is %s' % 'sapna')
a=34
print(type(a))
print('i wrote %f book today.' %a)
print('i wrote %f poem today.' %4)
x="my name is sapna rajput {2} and she likes{0} with {1} singing"
print(x.format("and","singing","new" ))
x=input("enter 1st number")
y=input("enter 2nd number")
print('my 1st number is',x)
x=int(input(''))
a=int(input(''))
print('my age is',a)
print('my age is',x-a)
print(f"my age is,{x},{y}")
print('my favourit %s is %s i love this %s'%('pink','because',10))
print('my number %s this is %f like'%(4333333333,34.44))
a,b,c,d =289,43.43,43-8j,"try"
print('my no %f uhfd %f hfdjhfkjd %s kjkdfjk jkkjkj %s'%(a,b,c,d))
print('my copy %i mentain.' % 32.44)<jupyter_output>my copy 32 mentain.
<jupyter_text># casting
<jupyter_code>a=10
b=56
c=a+b
print(c)
a=input("enter a number")
b=int(a)
c=float(b)
print(a)
print(c)
print(b)
a=input("enter a number")
b=complex(a)
print(b)
c=float(a)
print(c)
d=complex(c)
print(d)
x=input("enter 1st number")
y=float(x)
z=int(y)
print(y)
print(z)
x=int(2) #integer casting-int()
y=int(4.5)
z=int(3.5) #can't convert complex to int
print(x) #can't convert complex to flot
print(z)
print(y)
a=32.45
b=45+5j
c=int(a)
print(c)
print(b)
a=float(53)
b=float("34.34") #float casting-float()
c=float(44.3)
d=float(-34)
print(d)
print(c)
print(a)
print(b)
print(type(a),type(b),)
a=("43.34")
b=("-43")
c=("435") #########ask a question
d=("5")
print( r"a,'\n',b,'\n',c,'\n',d")
print(type(a),type(b),type(c),type(d))
a=str("455555555")
b=str("34.43")
c=str("44-4j")
d=str("4355")
print(a)
print(b)
print(c)
print(d)
print(type(a),type(b),type(c),type(d))
<jupyter_output>455555555
34.43
44-4j
4355
<class 'str'> <class 'str'> <class 'str'> <class 'str'>
<jupyter_text># operation<jupyter_code># + addition x+y
# - subtraction x-y
# * multiplication x*y
# / division x/y
# % modules x%y
# ** exponentitation x**y
# // floor division x//y
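# added quick example: all seven arithmetic operators applied to the same pair of numbers
a, b = 7, 3
print(a + b)    # 10      addition
print(a - b)    # 4       subtraction
print(a * b)    # 21      multiplication
print(a / b)    # 2.33... division
print(a % b)    # 1       modulus
print(a ** b)   # 343     exponentiation
print(a // b)   # 2       floor division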
number_1 = input("enter your first number : ")
number_2 = input("enter your second number : ")
a =int(number_1)
b = int(number_2)
c = a+b
d = a-b
f =a/b
g=a%b
h=a**b
print(f"sum of {a} or {b} is = {c}")
print(f"sub of {a} or {b} is = {d}")
print(f" division of {a} or {b} is ={f}")
print(f" module of {a} or {b} is = {g}")
print(f"exponentitation {a} or {b} is = {h}")
a=int(input("enter a number"))
b=int(input("enter number"))
a=2
b=3
c=a+b
d=a-b
e=a*b
f=a/b
g=a**b
h=a//b
i=a%b
print(f"sum of {a} or {b}={c}")
print(f"sub of {a} or {b}={d}")<jupyter_output>enter a number5
enter number10
sum of 2 or 3=5
sub of 2 or 3=-1
<jupyter_text># b. python assignment operators
Assignment operators are used to assign values to variables (a short example of the augmented assignment operators is added in the cell below).<jupyter_code>num=45.4
name=56-8j
print(f'my number is, {num}, {name}')
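# Added example: the augmented assignment operators combine an operation with assignment
x = 10      # plain assignment
x += 5      # same as x = x + 5  -> 15
x -= 3      # same as x = x - 3  -> 12
x *= 2      # same as x = x * 2  -> 24
x //= 5     # same as x = x // 5 -> 4
x **= 2     # same as x = x ** 2 -> 16
print(x)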
x="sapna"
y=8889
print(f"my name is {x},my number is {y}")
x=32
y=323.32
print(f"my age is {x+y}")
print(f"my age is {x-y}")
a="sapna"
b="rajput"
print(f" my name is {a} {b}")<jupyter_output> my name is sapna rajput
<jupyter_text># c.python comparison operators
Comparison operators are used to compare two values:<jupyter_code>a=45
b=67
print(a==b)
print(a!=b)
print(a>=b)
print(b>=a)
print(b<=a)
a=45
b=45
print(a==b)
print(a!=b)
print(a>b)
a=43
b=43
print(a==b)
print(a!=b)
print(a>=b)
print(b>=a)
print(a<=b)
print(b>=a)
# == : equal                     x == y
# != : not equal                 x != y
# >  : greater than              x > y
# <  : less than                 x < y
# >= : greater than or equal to  x >= y
# <= : less than or equal to     x <= y
<jupyter_output><empty_output><jupyter_text># python logical operators
Logical operators are used to combine conditional statements:<jupyter_code># and : returns True if both statements are true
# or  : returns True if at least one of the statements is true
# not : reverses the result - returns False if the result is true
a=23
b=34
c=67
print(a>b and b<a)
a=34
b=45
c=23
print(a<b and b>a)
a=34
b=32
c=22
print(a>b and b<c)
x=78
y=33
z=34
print(not(x<y and z<y ))
x=23
y=45
z=36
print(not(y>x and x<z))
x=45
y=67
z=12
print(not(x>z and y<z))<jupyter_output>True
<jupyter_text># logical or operation<jupyter_code>a=34
b=22
c=35
print(a<c or b<a)
a=34
b=45
c=33
print(b<a or a<c)
a=32
b=19
c=34
print(a>b or c<b)
a=34
b=90
c=12
print(not(a>c or b<c))
a=34
b=3
c=42
print(not(a>b or c<a))
a=45
b=23
c=67
print(not(a<b or b>a))
a=67
b=34
c=23
print(not(a>b or b>c))
a=18
b=18
c=10
print(a>b or b>a)<jupyter_output>False
<jupyter_text># python identity operators<jupyter_code># identity operators are used to compare objects: not whether they are equal,
# but whether they are actually the same object, with the same memory location:
'''is : returns True if both variables are the same object (x is y)'''
x=44
y=44
print(x is y)
'''is not : returns True if both variables are not the same object (x is not y)'''
x=43
y=45
print(x is not y)
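# Added illustration: '==' compares values, 'is' compares identity (the same object in memory).
# CPython caches small integers, which can make 'is' look like '=='; lists show the difference clearly.
p = [1, 2, 3]
q = [1, 2, 3]
print(p == q)   # True  - equal values
print(p is q)   # False - two different objects
r = p
print(p is r)   # True  - both names refer to the same object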
x=20
y=267
print(x is y)
print(" id x :", id(x))
print(" id y :",id(y))
x=4545
print("id x :",id(x))
print(x,y)
x=32
y=43
print(x,y)
print("id x :",id(x))
print("id x :",id(y))
x=34
x=4
print("id x :",id(x))
print("id x :",id(x))
x=34
x=20
x1=int(x)
print(id(x))
print(id(x))
print(x is x)
print(x is not x)
x=20
y=20
print(x is y)
print(x is not y)
print(id(x))
print(id(y))
x=67
z=56
y=34
print(id(x))
print(id(z))
print(id(y))
print(x is z)
print(x is not z)
x=67
y=74
print("id x :",id(x))
print("id y :",id(y))
print(x,y)<jupyter_output>id x : 1558939856
id y : 1558939968
67 74
<jupyter_text># python membership operators
membership operators are used to test whether a sequence is present in an object:
<jupyter_code>''' in : returns True if a sequence with the specified value is present in the object'''
l=[ 1,2,3,4]
e=5
print(e in l)
''' not in : returns True if a sequence with the specified value is not present in the object'''
l=[1,2,3,4]
e=4
print(e not in l)
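# Added note: for dictionaries, the membership operators test the keys, not the values
d = {'a': 1, 'b': 2}
print('a' in d)          # True  - 'a' is a key
print(1 in d)            # False - 1 is a value, not a key
print(1 in d.values())   # True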
x="sapna is a good d girl"
y="pna"
print(y in x)
str_1="sapana"
str_2="as"
print(str_2 in str_1)
x={2332,324,344,23,43}
y=2
print(y in x)
x=342 ,54 ,35, 35 ,343, 453
y=34
print(y in x)
x="rtreterte"
y="re"
print(y in x)
x=33,34,4,3,2
y=345
print(y not in x)<jupyter_output>True
<jupyter_text># collection datatypes
there are four collection data types in the python programming language:
(1) list : [square brackets]
(2) tuple : (round brackets)
(3) set : {curly brackets}
(4) dictionary : {"key": value}<jupyter_code># (1) a list is a collection which is ordered, indexed and changeable; it allows
#     duplicate members
# (2) a tuple is a collection which is ordered, indexed and unchangeable; it allows
#     duplicate members
# (3) a set is a collection which is unordered, unindexed and changeable; no duplicate
#     members
# (4) a dictionary is a collection which is unordered but indexed with keys, and changeable;
#     no duplicate keys<jupyter_output><empty_output><jupyter_text># when choosing a collection type, it is necessary to understand the properties of that type<jupyter_code>this_list=(2,4,5,7,"sappu",45.7)
this_touple=(2,4,5,'payl')
this_set={4,6,8}
this_dict={'sappu':1,'tanwar':2,3:3}
print(type(this_dict))
print(type(this_list))
print(this_list[3])
print(this_touple[3])
print(this_dict[3])
<jupyter_output><class 'dict'>
<class 'tuple'>
7
payl
3
<jupyter_text># 1. list<jupyter_code># a list is a collection which is ordered and changeable; in python, lists are
# written with square brackets.
# a. getting started
# b. list methods
# c. conditional statements on lists<jupyter_output><empty_output><jupyter_text># 1(a)getting started<jupyter_code># 'creating a list'
this_list = ["apple", 4 , 'sapna' ,True, [1,2,3,4] , (10,20) , {'brand':'honda','name':'city',}]
print('this_ list:' , this_list)
print(this_list[4])
print(this_list[6])
print(this_list[-2])
print(this_list[-5][-1])
'accessing an element:'
print('element at position 0:', this_list[0])
print('element at position 5:',this_list[5])
'change item value:'
print('before:',this_list)
this_list[0]='mango'
print(id(this_list))
print('after:', this_list)
print(id(this_list))
'getting a sub list'
print("this_list:",this_list[0:3])#(start:end)
print(this_list[0:5:2])
print(this_list[0:5:3])
print(this_list[0:2])
print(this_list[2::])
name_=[(23,2),65,456.67,{8,6,9,0,9,0,9},"sapna","rajput",[2,5,56],{'12':'sappu','34':'sappuu'}]
print('name_',name_)
print(name_[4][-3])
print(name_[7])
print(name_[5][0])
print(name_[-1])
print(name_[-1]['12'])
'accessing an element'
print('elememt at position 1:',name_[1])
print('element at position 2:',name_[6])
print('element at position 3:',name_[7])
'change item value'
print('before this:',name_)
name_[5]=(4,5,5)
print('after this:',name_)
'getting a sub list'
print(name_[0:3])
print(name_[0:3:1])
print(name_[0:1])
x=["plan",'function',(32,234,4, 43),{5,54,455,6,64,4},45.66,54-6j,{'plan':'12','schime':23424,'tv':24.45}]
print(x)
print(type(x))
print(x[5])
print(x[-4])
print(x[2][-3])
print(x[2][-1])
print('element of position 1:',x[5])
print('element of position3:',x[4])
print('element of position 8:',x[-4])
print('before this:',x)
x[3]=(34,43.4,4+5j)
print('after this:',x)
x[3]=10
print(x)
print(x[0:4])
print(x[3:4:5])
print(x[1:2:3])
print(x[0:1:2])
print(x[0:3:5])<jupyter_output>['plan', 'function', (32, 234, 4, 43), {64, 4, 5, 6, 455, 54}, 45.66, (54-6j), {'plan': '12', 'schime': 23424, 'tv': 24.45}]
<class 'list'>
(54-6j)
{64, 4, 5, 6, 455, 54}
234
43
element of position 1: (54-6j)
element of position3: 45.66
element of position 8: {64, 4, 5, 6, 455, 54}
before this: ['plan', 'function', (32, 234, 4, 43), {64, 4, 5, 6, 455, 54}, 45.66, (54-6j), {'plan': '12', 'schime': 23424, 'tv': 24.45}]
after this: ['plan', 'function', (32, 234, 4, 43), (34, 43.4, (4+5j)), 45.66, (54-6j), {'plan': '12', 'schime': 23424, 'tv': 24.45}]
['plan', 'function', (32, 234, 4, 43), 10, 45.66, (54-6j), {'plan': '12', 'schime': 23424, 'tv': 24.45}]
['plan', 'function', (32, 234, 4, 43), 10]
[10]
['function']
['plan']
['plan']
<jupyter_text># list methods<jupyter_code># (1) len() - get the number of items in a list
lst_=["sapna",345,34,34.45,34+6j,'rajput']
print(lst_)
print(len(lst_))
s=[43,34.53343,34-3599j,"sapna is a good girl"]
print(len(s))
# (2)append():-the append mathod add a single item to the existing list
# or
# 'to add an item to the end of the list use the append () method'
lst_=["file", "copy","most",23434,355.546,34+43j,"practics"]
lst_.append("546")
print(lst_)
print(len(lst_))
name=["sapna rajput",99,445-5j,343+665j,'rajput is the best']
name.append('loyal')
print(name)
print(len(name))
# (3)insert():-'to add an item at the specifid index use the insert() method'
# it use to add new item in list valid inde
list_=["sing",'singh','song',345,35.54,34-34j]
print(list_)
list_.insert( 4,'pool')
list_.insert(-1,34)
print(list_)
# (4)remove():-'remove item form a list'or,remove the specifide item:-
sapna=['sapna','good',4324,345-3j,344+34j,34.345,'code']
sapna.remove('good')
sapna.remove(34.345)
print(sapna)
# (5)pop method():-remove the specified index(or the last item of index is not specified)
list_=["app","roma","rahul"]
list_.pop()
list_.pop(0)
# (6)del():-'the del keyword remove the specified index:'
lst=[32,34.34,3433-9j,343,'sapna is a good','mamta']
del lst[-2]
print(lst)
print(lst[1:3:4])
this_list=['sappu','she','logic','print']
print(this_list)
del this_list[0]
print(this_list)
print(this_list,'sappu')
print(this_list[0:4])
# (7)clear:-'the clear mathod empties the list'
sapna=['adaf',3253425,2424,'klslkgj']
print(sapna)
sapna.clear()
print(sapna)
# (8)copy():-'copy a list' (also works for dictionaries)
# one way is to use the built-in list method copy()
file=['maya','shanvi','neha','mohit']
myfile=file.copy()   # a real copy: a new, independent list
print(myfile)
lst=file             # note: this is NOT a copy, just another name for the same list object
print(lst)
# (9)count:-return the number of occurences of value
post=['sagar','marketing','newplan',854,435,43,43,34.4,'newplan','setup']
a=post.count(34.4)
print(a)
# (10)index:-the index() mathod returns the position at the first occurrece of
# the specified
cards=['invite',7687,98,'shadule','master']
y=cards.index(7687)
y=cards.index('invite')
print(y)
# (11)extend:-"add elements of one list to another"
fruits=['apple','graps','nudels','papaya']
veg=('potato','tomato','ladyfinger',34,344-34j,34.34)
fruits.extend(veg)
print(fruits)
# (12)reverse:-reverse the order of the list:
x=[24334,24,245,43-6j,34.345,]
x.reverse()
print(x)
game=['ludo','pubg','templerum']
game.reverse()
print(game)
# (13)sort():-sort the element in list
name=[324,234,2432,23,232,12,123,3,23,23]
name.sort()
print(name)
cars=['a','b','c']
reversed_cars=cars[::-1]     # slicing with step -1 gives a reversed copy (avoid shadowing the built-in name 'list')
print(reversed_cars)
cars.sort(reverse=True)      # descending order
cars.sort(reverse=False)     # ascending order (the default) - this overwrites the descending sort
print(cars)
<jupyter_output>[3, 12, 23, 23, 23, 123, 232, 234, 324, 2432]
['c', 'b', 'a']
['a', 'b', 'c']
<jupyter_text># 1(c)conditional statement on list<jupyter_code># 'print all item in the list ,one by one:'
this_list=['apple','banana','cherry']
for i in this_list:
print(i)
'check if "apple" is present in the list:'
this_list=["spna","priya","salonii"]
if "spna" in this_list:
print("yes")
else:
print("no")
print()<jupyter_output>apple
banana
cherry
yes
<jupyter_text># (2)Tuple<jupyter_code># a tuple is a collection which is ordered and unchangeable; in python, tuples are written with round brackets
# (a) getting started
# (b) tuple methods
# (c) what does not work for tuples
# (d) conditional statements on tuples
# (a)getting started:-'create a tuple'
lst=('pending','programme','releif',56,567.7,687-8j)
print(lst)
print(type(lst))
# "accessing an element"
print(lst[1])
print(lst[2])
'getting a sub list'
print(lst[0:3])
print(lst[0:2:3])
print(lst[0::1])
<jupyter_output>34.34
(3433-9j)
[32, 34.34, (3433-9j)]
[32]
[32, 34.34, (3433-9j), 343, 'mamta']
<jupyter_text># tuple-method/function<jupyter_code># (1)len()mathod -print the number of item in tuple
tuple_is=('number','is', 'the',245,43554,34-46j,356,536.4534 )
print(len(tuple_is))
mytuple=("my hobby is singing",35345636,356.465,68)
print(len(mytuple))
# (2)'count()mathod :- return the number of occurences of a value
tuple_side=(3453,345.4,35363,"the number of value",24-24j,'comb')
print(tuple_side.count(3453))
print(tuple_side.count('comb'))
# (3)'index()method':- return first index of value
x=('sdfjksd','rahul','diksha','rahul',324,324)
print(type(x))
print(len(x))
print(x.index(324))
# (4)'the del keyword delete the tuple completly'
this=(324,55.35,5,45,5,6,7.6,'name','sapna','tost')
del this
print(this)<jupyter_output><empty_output><jupyter_text># 2(c)what doesn't work for tuples<jupyter_code>'add item'
'once a tuple is created,you cannot add item to it tuple are unchangeable'
'change item value:'
# thistuple[3]='tiger'
# thistuple
# thistuple=('mango','papaya','orange')
# # thistuple.append('fruits') error bcoz tuple not allow
# print(thistuple)
'remove item'
'once a tuple is created you can not remove otem in tuple are unchangeable'
# y=(7678,97,44,7.78,87,'mosam')
# print(y)
# print(y.remove(87)) we also find in this case error
'del item at spasific index'
x=('saroj','komal','neelam')
# del(x[0]) also error tuple not allow
# print(x)
del(x)<jupyter_output><empty_output><jupyter_text># 2(d)conditional statement on tuple<jupyter_code>'iterate through the item and print the values:-'
# thistuple=('sappu','rathore','radhe','rani',443,34.43)
# for x in thistuple:
# print(x)
'check if "rathore" is present in this tuple:'
thistuple=("sappu","rathore","radhe")
if "sappu" in thistuple:
print("yes,'sappu' is in the name tuple")
else:
print("not in thistuple")
print()
tuple_=('ajay','rahul','rita','rashmi', 23432,234-34j,43.34,3,3,3)
for y in tuple_:
print(y)
if 'sapna' in tuple_:
print("yes 'sapna' in this tuple_")
else:
print('not in thi tuple_')
print()
tupletype=("sona rajput",34,35,55,65,65.54,45-54j,54,45,45,45,)
for x in tupletype:
print(x)
if 54 in tupletype:
print("yes")
else:
print("not")<jupyter_output>sona rajput
34
35
55
65
65.54
(45-54j)
54
45
45
45
yes
<jupyter_text># (3)set<jupyter_code># a set is a collection which is unordered, unindexed and changeable, and it does
# not allow duplicate members
# (a) getting started
# (b) set methods
# (c) what does not work for sets
# (d) conditional statements on sets
# 3(a) getting started - "create a set"
set_1={"sappu",8798,876.78,876-8j,"sappu","code"}
print(set_1)
<jupyter_output>{'code', 'sappu', 876.78, (876-8j), 8798}
<jupyter_text># 3(b)set methods function<jupyter_code># "add item to set"
# (1)add method:- add()
set_1={"sapna","manshi",2233,23,32,32,"sapna",34.34}
set_1.add(67.9)
set_1.add("dfef")
print(set_1)
(2)'add multiple items to a set,using the "update()" method'
set_1={79,9,89-8j,98+9j,89.98,"parul","code","session"}
set_2={"yes","code is here"}
set_1.update(set_2)
print(set_1)
set_2.update(set_1)
print(set_2)
# (3)"remove item"
# 'to remove an item in a set, use the "remove()", or the "discard()"
# method'
set1={"jam","trap","net","payal"}
set1.remove("net")
print(set1)
set1.remove("jam")
print(set1)
x={345,456,454,3,25,356,6,77,"rert"}
y={45.454,4545,3,88886,6,868,567}
x.remove(345)
y.remove(567)
print(x)
print(y)
# print(x+y) - error
'discard()'
set1.discard('payal')
print(set1)
x.discard(3)
print(x,y)
y.discard(567)
print(y,x)
(4)# 'you can also use the "pop()",method or remove an item, but this method
# 'will remove the last remeber the sets are unordered, so you will not
# 'know what,item that gets remove.
'the return value of the "pop()" methodis the remove the item'
the_set={"pari",234,"sanvi",45.5,"guddu"}
x=the_set.pop()
the_set.pop()
the_set.pop()
the_set.pop()
print(x)
print(the_set)
# (5)'difference() - remove all element of another set from this set.'
x_1={34,3,4,5,3,5,6}
x_2={1,3,4,53,34,5}
# x={'sappu',"raju","radha","palm"}
# x1={'test','easy','sappu'}
y_=x_2.difference(x_1)
print(y_)
# x2=x1.difference(x)
# print(x2)
'difference_update() - remove all element of another set from this set.'
'the difference_update() returns none indicating the object (set) is mutated'
a={"sap",'na','will','not','try',23,3,2,5,45}
b={'na',23,5,45,3,333,5,4 ,"test"}
c=a.difference_update(b)
print(a,'\n',b,'\n',c)
print()
(6)'intersection()- return the intersecttion of two sets an a new set.'
this_set={4,45,54,6,43,}
that_set={34,35,7,6}
inter_set=that_set.intersection(this_set)
print(inter_set)
# 'intersection_update()- return the intersection of two sets as a new set
# 'the intersection_update() return none indicating the object (set)
# is muted.
set={2,4,5,6,78,9}
set1={1,6,8,0,5,4,8,989}
this=set1.intersection_update(set)
print(set1,'\n',set,'\n',this)
print()
# (7)'union() - return the value of set as a new set.'
myset={1,23,3,4,4,}
thisset={'a','s','t','y'}
union=thisset.union(myset)
print(union)
x=myset.update(thisset)
print(x)
thisset.update(myset)
print(myset)
set_1={1,3,4,2,6,7,5}
set_2={5,6,7,'d','s','f','g','k'}
union_=set_1.union(set_2)
print(union_)
x=set_2.update(set_1)
print(x)
set_1.update(set_2)
print(set_2)
# dir(set)
# (8)'isdisjoint()' -'return True if two set have a null intersection.'
my_set={3,5,7,4,2}
your_set={4,5,3,22,5}
print("first operation:",my_set.isdisjoint(your_set))
this_set={6,8}
print("second operation:",my_set.isdisjoint(this_set))
# 'issubset()'-'report whether another set contain this set,'
x={1,4,7,8,3,2}
y={3,2,1}
# y={9,1,2} #'y' is subset of x when 'y' all eliment are in x
print(y.issubset(x))
# 'issuperset()' -'report whether another set contain another set.'
a={3,56,67,3,7,734.56,4}
b={34,34,6,7,8,99,}
print(a.issuperset(b))
a={'a','b','c',1,4,6,7}
b={'a',1,4}
print(a.issuperset(b))
# note-if b is a subset of a then a is a superset of b
(9)'get len of a set'
'to determine how many item a set has,use the"len()"method.'
myset={34,35,5,34.45,45-45j,45,"sapna",'[2,43,5,]',(23,2345,34)}
print(len(myset))
(10)'clear set'
'the "clear()" method empities the set'
your_set={35,"sapna",34-43j,34,"raj",235}
your_set.clear()
print(your_set)
(11)'delete set'
'the del keyword will delete the set completely'
myset={23,3,'a',"sapna",(23,34,43,34.3)}
# print(myset)
del myset
<jupyter_output><empty_output><jupyter_text># (3)c what dose't work for sets<jupyter_code># "indexing won't work foe set sincesets are unordered and unindexed"
# that={1,3,3,4}
# print(that[1])
'set do not support duplicate item'
this={3,3,6,4,7,8,3,"sap","na","sap"}
print(this)<jupyter_output>{3, 4, 6, 7, 8, 'sap', 'na'}
<jupyter_text># 3(d)conditional statements on sets
help access elements of a set also<jupyter_code>
'loop through this set,and print the values:'
thisset={"sapna",34,"rajput",34-5j,343.3,34+3j,"apple"}
for y in thisset:
print(y)
'check if an item present in the set'
myset={23,23,"ajlsd",345634,"koml","aashu"}
if 23 in myset:
print("yes")
else:
print("no")
<jupyter_output>34
sapna
apple
(34+3j)
rajput
(34-5j)
343.3
yes
<jupyter_text># 4 dictionaries<jupyter_code># a dictionary is a collection which is unordered and changeable, and it supports
# duplicate values (but not duplicate keys).
# in python, a dictionary is written with curly brackets, and it holds key:value pairs.
# (a) getting started
# (b) dictionary methods
# (c) control flow using dictionaries<jupyter_output><empty_output><jupyter_text># (a)getting start<jupyter_code>### 'creating a dictionary'
this_dict={"sapna":"rajput","room":"clean","pen":"book"}
print(this_dict)
'accessing an element'
a=this_dict["room"]
# a=this_dict[2] #there are indexing in "key" and "value"
# print(a)
print(a)
# 'changing and adding a value in the dictionary'
# this_dict={"more":"then","number":2435,"sapna":"math"}
# this_dict["more"]=23445
# this_dict["then"]=23-4j
# this_dict["cool"]="hot"
# print(this_dict)
<jupyter_output>{'sapna': 'rajput', 'room': 'clean', 'pen': 'book'}
clean
<jupyter_text># 3(b)dictionary method<jupyter_code>'get() accessing an element'
this_dict={"sapna":"rajput","room":"clean","pen":"book"}
a=this_dict.get('sapna')
print(a)
room=this_dict.get('room')
print(room)
'update() add item of one dict into another'
this_dict={"sapna":"rajput","room":87.8,"pen":7576}
that_dict={"shoot":'red',45545:4555555}
this_dict.update(that_dict)
print(this_dict)
'removing item from the dictionary'
'the pop() method remove the item with the specified key name:'
this_dict={"name":"sapna","mom":"dad",3453:34,34:45}
this_dict.pop("name")
print(this_dict)
# print(this_dict.values())
'popitem() - remove and return some{key,value} pair as a tuple'
_mydict={"toy":"baby","number":4899,"sapna":8478}
item=_mydict.popitem()
print(item,_mydict)
print(bool("abc"))
"keys() - a set like object providing a view on dicionary's keys"
this_dict={"kite":"like","walk":"ride","weather":2442}
print(this_dict.keys())
"values() - an object providing a view on dictionary's value"
my_dict={"name":"sapna","last":"night","view":43}
print(my_dict.values())
print(type(my_dict))
x={"23":23.4,"32":23-4j,"fly":232,"234":2}
print(x.items())
print(type(x.items))<jupyter_output>dict_values(['sapna', 'night', 43])
<class 'dict'>
dict_items([('23', 23.4), ('32', (23-4j)), ('fly', 232), ('234', 2)])
<class 'builtin_function_or_method'>
<jupyter_text># 4(c)control flow using dictionaries<jupyter_code>'print all key name in the dictionary,one by one:'
this_dict={"sapna":"code","colon":"paranthies","single":"double"}
for h in this_dict:
print(h)
'print all value in the ditionary, one by one:'
this_dict={"sapna":"mam","red":"orange","buy":"sell"}
for i in this_dict.values():
print(i)
'check if a key is there in the dictionary'
my_={"sapna":"mam","red":"orange","buy":"sell"}
if "sapna" in my_:
print("yes")
else:
print("no")
'check if a value is there in the dictionary'
if "red" in my_.values():
print('yes')
else:
print("no")
this_list=[24.5,35,345,"papa",45345,35-4j,]
'converting a datatype to another'
lst_1=["sapna",3453,"sham","rita",56,5,4,6,7,5]
tuple_=tuple(lst_1)
print(tuple_)
print(type(lst_1)),print(type(tuple_))
set2=set(lst_1)
print(set2)
print(type(set2))
lst=list(set2)
print(lst)
lst=list(tuple_)
print(lst)
print(type(lst))
set_1=set(tuple_)
print(set_1)
print(type(set_1))
tuple2=tuple(set_1)
print(tuple2)
print(type(tuple2))
'''content
1.if conditions
2.while loops
3.functions
4.practice questions'''
print()<jupyter_output><empty_output><jupyter_text># 1. if conditions<jupyter_code># python conditions and if statements
# python supports the usual comparison conditions from mathematics:
# equals:                   a == b
# not equals:               a != b
# less than:                a < b
# less than or equal to:    a <= b
# greater than:             a > b
# greater than or equal to: a >= b
# these conditions can be used in several ways, most commonly in
# "if statements" and loops.
# an "if statement" is written using the if keyword.
# syntax:
# if condition:
#     statements to be executed
# elif condition:
#     statements to be executed
# else:
#     statements to be executed
'basic example'
a=34
b=20
if a>b:
    print("sorry a is greater than b")
elif a<b:
    print("a is less than b")
else:
print("error")
print()
''' input 2 number from the user and print out the greater'''
a=int(input('enter first number'))
b=int(input('enter second number'))
# if a>b:
# print('yes is greater then')
# elif a<b:
# print('no is not')
# else:
# print('error')
if a>b:
print(f"{a} is greater")
print(f" a and b addition is {a+b}")
if a+b==68:
print("True")
elif b>a:
print(f" {b} is greater")
else:
print(f"{a} is equal to {b}")
'''if there is only one statement to be executed,
the if condition and the statement can be written on the same line'''
a=24
b=79
if a<b:print("yes")
elif a==b:print('it is equal')
else :print('b is greater than a')
'''another way of writing a one-line if statement'''
a,b= 20,26 #multiple assignment
print('right') if a>b else print("b")<jupyter_output>b
|
no_license
|
/.ipynb_checkpoints/Untitled-checkpoint.ipynb
|
sapna-rajput/basic-python-
| 32 |
<jupyter_start><jupyter_text>As expected, the player with the higher rating usually wins the game. However, there is a difference of 4 percent between the black and white win percentages in each scenario; it is almost as if white has an advantage...<jupyter_code>games_df['avrating'] = (games_df['white_rating']+games_df['black_rating'])/2
games_df.groupby('rated').agg({'avrating': 'mean'})<jupyter_output><empty_output><jupyter_text>There isn't much of a difference between player ratings in rated and unrated matches - about 13.2/1595 (less than one percent).<jupyter_code>gamesrated = games_df[games_df['rated']]
gamesnotrated = games_df[~games_df['rated']]
print('Number of unrated games that white won:', len(gamesnotrated.query("winner == 'white'")))
print('Number of unrated games that black won:', len(gamesnotrated.query("winner == 'black'")))
print('Number of unrated games that ended in a draw:', len(gamesnotrated.query("winner == 'draw'")))
labels = 'White', 'Black', 'Tie'
sizes = [len(gamesnotrated.query("winner == 'white'"))*100/len(gamesnotrated),
len(gamesnotrated.query("winner == 'black'"))*100/len(gamesnotrated),
len(gamesnotrated.query("winner == 'draw'"))*100/len(gamesnotrated)]
fig1, ax1 = plt.subplots()
ax1.pie(sizes, labels=labels, autopct='%1.1f%%',
shadow=True, startangle=90)
ax1.axis('equal')
plt.show()
print('Number of rated games that white won:', len(gamesrated.query("winner == 'white'")))
print('Number of rated games that black won:', len(gamesrated.query("winner == 'black'")))
print('Number of rated games that ended in a draw:', len(gamesrated.query("winner == 'draw'")))
len(gamesrated)
labels = 'White', 'Black', 'Tie'
sizes = [len(gamesrated.query("winner == 'white'"))*100/len(gamesrated),
len(gamesrated.query("winner == 'black'"))*100/len(gamesrated),
len(gamesrated.query("winner == 'draw'"))*100/len(gamesrated)]
fig1, ax1 = plt.subplots()
ax1.pie(sizes, labels=labels, autopct='%1.1f%%',
shadow=True, startangle=90)
ax1.axis('equal')
plt.show()<jupyter_output><empty_output><jupyter_text>The player playing white seems to have an obvious advantage over the player playing black, regardless of the ratings<jupyter_code>games_df.groupby('winner').agg({'turns': 'mean'})<jupyter_output><empty_output><jupyter_text>That's interesting. The matches that ended in a draw usually had more turns played than matches won by either side, as expected. However, the matches won by the player playing black seem to last more than 5 percent longer than those won by the player playing white.<jupyter_code>games_df.groupby('rated').agg({'turns':'mean'})<jupyter_output><empty_output><jupyter_text>The games that were rated generally had 7.7 more turns played per match than the unrated games<jupyter_code>games150 = games_df.sample(150)
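# fit a degree-1 polynomial (a least-squares line) of turns vs. average rating;
# np.polynomial.polynomial.polyfit returns coefficients from lowest to highest
# degree, so the first value is the intercept and the second is the slope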
intercept, slope = np.polynomial.polynomial.polyfit(
games150.avrating,
games150.turns,
1)
ratings = np.array([min(games150.avrating), max(games150.avrating)])
turns = intercept + slope * ratings
plt.scatter("avrating", "turns", data=games150)
plt.plot(ratings, turns, color="magenta")
plt.title("Rating vs Number of Turns")
plt.xlabel("Average rating of the players")
plt.ylabel("Number of turns played in the match")
plt.show()<jupyter_output><empty_output><jupyter_text>Even though the turns per game were higher in rated games, the average rating of the players seemed to have no correlation with the turns played. The slight slope that can be seen seems to be
a result of the lack of data at the extreme ends<jupyter_code>freq = games_df[games_df['winner']=='black']['opening_name'].value_counts()
print("Printing the frequency")
print(freq)<jupyter_output>Printing the frequency
Van't Kruijs Opening 226
Sicilian Defense 194
Sicilian Defense: Bowdler Attack 164
Scandinavian Defense 123
French Defense: Knight Variation 121
...
Four Knights Game: Spanish Variation | Symmetrical Variation #3 1
French Defense: Pelikan Variation 1
Pirc Defense: Classical Variation | Quiet System | Czech Defense 1
System: Double Duck Formation 1
Italian Game: Evans Gambit | Stone-Ware Variation 1
Name: opening_name, Length: 1145, dtype: int64
<jupyter_text>This series shows the openings with which the player playing black got the most wins.<jupyter_code>print("Black win percentage when using the Van't Kruijs Opening:",
round(len(games_df[(games_df['opening_name']=="Van't Kruijs Opening")&(games_df['winner']=='black')])*100
/len(games_df[games_df['opening_name']=="Van't Kruijs Opening"]),2))
print('Black win percentage when using the Sicilian Defense:',
round(len(games_df[(games_df['opening_name']=="Sicilian Defense")&(games_df['winner']=='black')])*100
/len(games_df[games_df['opening_name']=="Sicilian Defense"]),2))
print('Black win percentage when using other openings: ',
round(len(games_df[(games_df['winner']=='black')&(~games_df['opening_name'].isin(['Sicilian Defense', "Van't Kruijs Opening"]))])*100/len(games_df[~games_df['opening_name'].isin(['Sicilian Defense', "Van't Kruijs Opening"])]),2))
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
openings = ["Van't Kruijs", 'Sicilian', 'Other Openings']
percentages = [61.413, 54.19, len(games_df[(games_df['winner']=='black')&(~games_df['opening_name'].isin(['Sicilian Defense', "Van't Kruijs Opening"]))])*100/len(games_df[~games_df['opening_name'].isin(['Sicilian Defense', "Van't Kruijs Opening"])])]
ax.bar(openings, percentages)
plt.show()<jupyter_output><empty_output><jupyter_text>We can see that the Van't Kruijs and Sicilian Defense openings provide black with a statistical advantage over the other openings<jupyter_code>freq = games_df[games_df['winner']=='white']['opening_name'].value_counts()
print("Printing the frequency")
print(freq)
print('White win percentage when using the Scandinavian Defense: Mieses-Kotroc Variation:',
round(len(games_df[(games_df['opening_name']=="Scandinavian Defense: Mieses-Kotroc Variation")&(games_df['winner']=='white')])*100
/len(games_df[games_df['opening_name']=="Scandinavian Defense: Mieses-Kotroc Variation"]), 2))
print('White win percentage when using the Scotch Game:',
round(len(games_df[(games_df['opening_name']=="Scotch Game")&(games_df['winner']=='white')])*100
/len(games_df[games_df['opening_name']=="Scotch Game"]), 2))
print('White win percentage when using other openings:',
round(len(games_df[(games_df['winner']=='white')&(~games_df['opening_name'].isin(['Scandinavian Defense: Mieses-Kotroc Variation', "Scotch Game"]))])*100/len(games_df[~games_df['opening_name'].isin(['Scandinavian Defense: Mieses-Kotroc Variation', "Scotch Game"])]), 2))
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
openings = ["Scandinavian Defense", 'Scotch Game', 'Other Openings']
percentages = [63.32, 53.506, len(games_df[(games_df['winner']=='white')&(~games_df['opening_name'].isin(['Scandinavian Defense: Mieses-Kotroc Variation', "Scotch Game"]))])*100/len(games_df[~games_df['opening_name'].isin(['Scandinavian Defense: Mieses-Kotroc Variation', "Scotch Game"])])]
ax.bar(openings, percentages)
plt.show()<jupyter_output><empty_output>
|
no_license
|
/ChessMatches.ipynb
|
ShauryaJeloka/Jeloka_pyclass
| 8 |
<jupyter_start><jupyter_text>### Data input<jupyter_code>import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split
# change to location of corpus, generated by ProjectDDI solution
filePath_corpus = 'd:/share/private/ddi/corpus.csv'
data_corpus = pd.read_csv(filePath_corpus, sep = ',')
X = data_corpus[data_corpus.columns[0:3016]].values
Y = data_corpus[data_corpus.columns[3016:]].values
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.20, random_state=42)
Y_nn = np.zeros(shape = [X.shape[0], 2], dtype=int)
for i in range(X.shape[0]):
if (Y[i][0] == 1):
Y_nn [i][0] = 1
else:
Y_nn [i][1] = 1
X_train_nn, X_test_nn, Y_train_nn, Y_test_nn = train_test_split(X, Y_nn, test_size=0.20, random_state=42)
num = X.shape[0]
print('Corpus size is {0}'.format(num))
indexTrain0 = np.where(Y_train == -1)[0]
indexTrain1 = np.where(Y_train == 1)[0]
indexTest0 = np.where(Y_test == -1)[0]
indexTest1 = np.where(Y_test == 1)[0]
print('Positive count {0} + {1}'.format(len(indexTrain1), len(indexTest1)))
print('Negative count {0} + {1}'.format(len(indexTrain0), len(indexTest0)))<jupyter_output>Corpus size is 365981
Positive count 91462 + 22941
Negative count 201322 + 50256
<jupyter_text>### SVM<jupyter_code>num_features = 3016
# SVM model
x = tf.placeholder(dtype=tf.float32, shape=[None, num_features])
y = tf.placeholder(dtype=tf.float32, shape=[None,1])
W = tf.Variable(tf.ones([num_features, 1]))
b = tf.Variable(tf.zeros([1]))
y_raw = tf.matmul(x, W) + b
# Optimization and train step
regularization_loss = tf.reduce_sum(tf.square(W))
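# hinge loss: sum over examples of max(0, 1 - y * y_raw), with labels y in {-1, +1}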
loss = tf.reduce_sum(tf.maximum(tf.zeros_like(y), 1 - y * y_raw));
svm_loss = 0.1 * regularization_loss + loss;
train_step = tf.train.GradientDescentOptimizer(0.001).minimize(svm_loss)
# Evaluation.
predicted_class = tf.sign(y_raw);
correct_prediction = tf.equal(y, predicted_class)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, dtype=tf.float32))
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
for i in range(10):
sess.run(train_step, feed_dict={x: X_train, y: Y_train})
print('loss: {0}'.format(sess.run(svm_loss, feed_dict={x: X_train, y: Y_train})))
print('accuracy: {0}'.format(sess.run(accuracy, feed_dict={x: X_train, y: Y_train})))
W_calc = sess.run(W, feed_dict={x: X_train, y: Y_train})
b_calc = sess.run(b, feed_dict={x: X_train, y: Y_train})
print(W_calc)
print(b_calc)
def getMetrics(W, b, X, Y):
m = np.matrix([[0, 0], [0, 0]])
b_current = X.dot(W)
print(b_current.shape)
num = X.shape[0]
for i in range(num):
label = 1
if (b_current[i] < b):
label = -1
x_index = 0
if (Y[i] == -1):
x_index = 1
y_index = 0
if (label == -1):
y_index = 1
m[x_index, y_index] += 1
print (' Predicted postive Predicted negative')
print ('True positive {0} {1}'.format(m[0,0], m[0,1]))
print ('True negative {0} {1}'.format(m[1,0], m[1,1]))
precision = (m [0,0] / (m[0,0] + m[1, 0])) * 100
recall = (m [0, 0] / (m[0, 0] + m [0, 1])) * 100
acc = ((m [0, 0] + m[1, 1]) / num) * 100
print('Precision: {0}'.format(precision))
print('Recall: {0}'.format(recall))
print('Accuracy: {0}'.format(acc))
getMetrics(W_calc, b_calc, X_train, Y_train)
getMetrics(W_calc, b_calc, X_test, Y_test)
W_metric = np.ones((num_features, 1))
b_metrics = np.zeros(1)
getMetrics(W_metric, b_metrics, X_test, Y_test)<jupyter_output>(46134, 1)
Predicted postive Predicted negative
True positive 22672 0
True negative 23462 0
Precision: 49.143798500021674
Recall: 100.0
Accuracy: 49.143798500021674
<jupyter_text>### Neural network<jupyter_code># Parameters
learning_rate = 0.1
num_steps = 100
display_step = 10
num_features = 3016
# Network Parameters
n_hidden_1 = 512
n_hidden_2 = 128
num_classes = 2
# tf Graph input
x = tf.placeholder("float", [None, num_features])
y = tf.placeholder("float", [None, num_classes])
# Store layers weight & bias
weights = {
'h1': tf.Variable(tf.random_normal([num_features, n_hidden_1])),
'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
'out': tf.Variable(tf.random_normal([n_hidden_2, num_classes]))
}
biases = {
'b1': tf.Variable(tf.random_normal([n_hidden_1])),
'b2': tf.Variable(tf.random_normal([n_hidden_2])),
'out': tf.Variable(tf.random_normal([num_classes]))
}
# Create model
def neural_net(x):
    # Hidden fully connected layer with 512 neurons
layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
    # Hidden fully connected layer with 128 neurons
layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
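    # note: no activation function (e.g. tf.nn.relu) is applied between these layers,
    # so the hidden stack is effectively linear before the softmax output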
# Output fully connected layer with a neuron for each class
out_layer = tf.matmul(layer_2, weights['out']) + biases['out']
return out_layer
# Construct model
logits = neural_net(x)
# Define loss and optimizer
loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op)
# Evaluate model (with test logits, for dropout to be disabled)
predicted = tf.argmax(logits, 1)
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()
# Start training
with tf.Session() as sess:
# Run the initializer
sess.run(init)
for step in range(1, num_steps+1):
# Run optimization op (backprop)
sess.run(train_op, feed_dict={x: X_train_nn, y: Y_train_nn})
if step % display_step == 0 or step == 1:
# Calculate batch loss and accuracy
loss, acc = sess.run([loss_op, accuracy], feed_dict={x: X_train_nn, y: Y_train_nn})
print('Step {0}, Loss= {1} Accuracy = {2}'.format(step, loss, acc))
print("Optimization Finished!")
print('Test accurracy {0}'.format(sess.run(accuracy, feed_dict={x: X_test_nn,y: Y_test_nn})))
Y_train_pred = sess.run(predicted, feed_dict={x: X_train_nn, y: Y_train_nn})
Y_test_pred = sess.run(predicted, feed_dict={x: X_test_nn, y: Y_test_nn})
def NN_Metrics(Y, Y_pred):
m = np.matrix([[0, 0], [0, 0]])
for i in range(Y.shape[0]):
if (Y[i][0] == 1):
trueValue = 0
else:
trueValue = 1
m[trueValue, Y_pred[i]] += 1
print (' Predicted postive Predicted negative')
print ('True positive {0} {1}'.format(m[0,0], m[0,1]))
print ('True negative {0} {1}'.format(m[1,0], m[1,1]))
precision = (m [0,0] / (m[0,0] + m[1, 0])) * 100
recall = (m [0, 0] / (m[0, 0] + m [0, 1])) * 100
acc = ((m [0, 0] + m[1, 1]) / Y.shape[0]) * 100
print('Precision: {0}'.format(precision))
print('Recall: {0}'.format(recall))
print('Accuracy: {0}'.format(acc))
NN_Metrics(Y_train_nn, Y_train_pred)
NN_Metrics(Y_test_nn, Y_test_pred)<jupyter_output> Predicted postive Predicted negative
True positive 24050 67412
True negative 13032 188290
Precision: 64.85626449490319
Recall: 26.295073363801357
Accuracy: 72.52445488824526
Predicted postive Predicted negative
True positive 6013 16928
True negative 3311 46945
Precision: 64.4894894894895
Recall: 26.21071444139314
Accuracy: 72.34995969780182
|
no_license
|
/PythonScripts/.ipynb_checkpoints/SVM and NN for DDI with TensorFlow-checkpoint.ipynb
|
andrejkoilic/ddi
| 3 |
<jupyter_start><jupyter_text># Sentiment Analysis and the Dataset
:label:`sec_sentiment`
Text classification is a common task in natural language processing, which transforms a sequence of text of indefinite length into a category of text. It is similar to image classification, the most frequently used application in this book, e.g., :numref:`sec_naive_bayes`. The only difference is that, rather than an image, a text classification example is a text sentence.
This section will focus on loading data for one of the sub-questions in this field: using text sentiment classification to analyze the emotions of the text's author. This problem is also called sentiment analysis and has a wide range of applications. For example, we can analyze user reviews of products to obtain user satisfaction statistics, or analyze user sentiments about market conditions and use it to predict future trends.
<jupyter_code>import os
from mxnet import np, npx
from d2l import mxnet as d2l
npx.set_np()<jupyter_output><empty_output><jupyter_text>## The Sentiment Analysis Dataset
We use Stanford's [Large Movie Review Dataset](https://ai.stanford.edu/~amaas/data/sentiment/) as the dataset for sentiment analysis. This dataset is divided into two datasets for training and testing purposes, each containing 25,000 movie reviews downloaded from IMDb. In each dataset, the number of comments labeled as "positive" and "negative" is equal.
### Reading the Dataset
We first download this dataset to the "../data" path and extract it to "../data/aclImdb".
<jupyter_code>#@save
d2l.DATA_HUB['aclImdb'] = (
'http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz',
'01ada507287d82875905620988597833ad4e0903')
data_dir = d2l.download_extract('aclImdb', 'aclImdb')<jupyter_output><empty_output><jupyter_text>Next, read the training and test datasets. Each example is a review and its corresponding label: 1 indicates "positive" and 0 indicates "negative".
<jupyter_code>#@save
def read_imdb(data_dir, is_train):
data, labels = [], []
for label in ('pos', 'neg'):
folder_name = os.path.join(data_dir, 'train' if is_train else 'test',
label)
for file in os.listdir(folder_name):
with open(os.path.join(folder_name, file), 'rb') as f:
review = f.read().decode('utf-8').replace('\n', '')
data.append(review)
labels.append(1 if label == 'pos' else 0)
return data, labels
train_data = read_imdb(data_dir, is_train=True)
print('# trainings:', len(train_data[0]))
for x, y in zip(train_data[0][:3], train_data[1][:3]):
print('label:', y, 'review:', x[0:60])<jupyter_output># trainings: 25000
label: 1 review: Normally the best way to annoy me in a film is to include so
label: 1 review: The Bible teaches us that the love of money is the root of a
label: 1 review: Being someone who lists Night of the Living Dead at number t
<jupyter_text>### Tokenization and Vocabulary
We use a word as a token, and then create a dictionary based on the training dataset.
<jupyter_code>train_tokens = d2l.tokenize(train_data[0], token='word')
vocab = d2l.Vocab(train_tokens, min_freq=5, reserved_tokens=['<pad>'])
d2l.set_figsize()
d2l.plt.hist([len(line) for line in train_tokens], bins=range(0, 1000, 50));<jupyter_output><empty_output><jupyter_text>### Padding to the Same Length
Because the reviews have different lengths, they cannot be directly combined into minibatches. Here we fix the length of each comment to 500 by truncating it or adding "<pad>" indices.
<jupyter_code>num_steps = 500 # sequence length
train_features = np.array([
d2l.truncate_pad(vocab[line], num_steps, vocab['<pad>'])
for line in train_tokens])
print(train_features.shape)<jupyter_output>(25000, 500)
<jupyter_text>### Creating the Data Iterator
Now, we will create a data iterator. Each iteration will return a minibatch of data.
<jupyter_code>train_iter = d2l.load_array((train_features, train_data[1]), 64)
for X, y in train_iter:
print('X:', X.shape, ', y:', y.shape)
break
print('# batches:', len(train_iter))<jupyter_output>X: (64, 500) , y: (64,)
# batches: 391
<jupyter_text>## Putting All Things Together
Last, we will save a function `load_data_imdb` into `d2l`, which returns the vocabulary and data iterators.
<jupyter_code>#@save
def load_data_imdb(batch_size, num_steps=500):
data_dir = d2l.download_extract('aclImdb', 'aclImdb')
train_data = read_imdb(data_dir, True)
test_data = read_imdb(data_dir, False)
train_tokens = d2l.tokenize(train_data[0], token='word')
test_tokens = d2l.tokenize(test_data[0], token='word')
vocab = d2l.Vocab(train_tokens, min_freq=5)
train_features = np.array([
d2l.truncate_pad(vocab[line], num_steps, vocab['<pad>'])
for line in train_tokens])
test_features = np.array([
d2l.truncate_pad(vocab[line], num_steps, vocab['<pad>'])
for line in test_tokens])
train_iter = d2l.load_array((train_features, train_data[1]), batch_size)
test_iter = d2l.load_array((test_features, test_data[1]), batch_size,
is_train=False)
return train_iter, test_iter, vocab<jupyter_output><empty_output>
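<jupyter_text>A quick usage sketch of the saved function (the batch size of 64 here is just an example; the call simply repeats the steps above end to end):<jupyter_code>train_iter, test_iter, vocab = load_data_imdb(64)
print(len(vocab))<jupyter_output><empty_output>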
|
no_license
|
/Training/mxnet/chapter_natural-language-processing-applications/sentiment-analysis-and-dataset.ipynb
|
Nathan-Faganello/PRe
| 7 |
<jupyter_start><jupyter_text># XBRL Reader for Spanish investment funds
The goal of this notebook is to develop a parser for the XBRL documents that Spanish investment funds file with the CNMV (the Spanish securities regulator).
## Table of contents
* Libraries import
* XBRL document import
## Libraries import<jupyter_code>%matplotlib inline
import pandas as pd
import numpy as np
import lxml
from lxml import etree
import datetime as dt
import matplotlib.pyplot as plt<jupyter_output><empty_output><jupyter_text>## XBRL document import
Comment and uncomment accordingly<jupyter_code>tree = etree.parse('../data/az_valor_iberia_H1_17.XML')
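# NOTE: only one of the three tree assignments (and one gestora/fondo pair below)
# should be left uncommented at a time; as written, the last assignment wins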
tree = etree.parse('../data/bestinver_iberia_H1_17.XML')
tree = etree.parse('../data/cobas_iberia_H1_17.XML')
root = tree.getroot()
gestora = 'AZ VALOR'
fondo = 'AZ VALOR IBERIA'
gestora = 'BESTINVER'
fondo = 'BESTINVER BOLSA'
gestora = 'COBAS'
fondo = 'COBAS IBERIA'
import xml.etree.ElementTree as ET
tree = ET.parse('../data/az_valor_iberia_H1_17.XML')
root = tree.getroot()
import untangle
obj = untangle.parse('../data/az_valor_iberia_H1_17.XML')
a = obj.xbrli_xbrl.xbrli_context[0]
companies = obj.xbrli_xbrl.iic_fim_InformeFIM.iic_fim_DatosFondoCompartimentoFIM.iic_fim_InversionesFinancierasFIM.iic_com_InversionesFinancierasValorEstimado.iic_com_InversionesFinancierasInterior.iic_com_InversionesFinancierasRVCotizada
a, b = None, None
for c in companies:
for b in c.iic_com_InversionesFinancierasImporte:
element = b.iic_com_InversionesFinancierasPorcentaje
value = b.iic_com_InversionesFinancierasValor
print(element.get_attribute('contextRef'))
if element.get_attribute('contextRef').endswith('ia'):
print(element.cdata)
a.iic_com_InversionesFinancierasPorcentaje
comp[0]<jupyter_output><empty_output><jupyter_text>## Get dates<jupyter_code>class XBRLParser(object):
STOCK_KEY = "{http://www.cnmv.es/iic/com/1-2009/2009-03-31}"
KEY_CTX = "{http://www.xbrl.org/2003/instance}"
def __init__(self, management, fund_name, xbrl_file):
"""
* management: Fund management company e.g. AZ Valor, Cobas...
* fund_name: Fund name
* xbrl_file: Path to the XBRL file to parse.
"""
self.management = management
self.fund_name = fund_name
self.xbrl_file = xbrl_file
def set_root(self):
"""Open the XBRL file with lxml
"""
tree = etree.parse(self.xbrl_file)
self.root = tree.getroot()
def set_dates(self):
"""Get initial and final dates of the period
"""
context = self.root.find("" + self.KEY_CTX + "context")
start_date_str = context.getchildren()[1].getchildren()[0].text
end_date_str = context.getchildren()[1].getchildren()[1].text
self.start_date = dt.datetime.strptime(start_date_str, "%Y-%m-%d")
self.end_date = dt.datetime.strptime(end_date_str, "%Y-%m-%d")
def parse_stocks(self):
"""Parse stocks
"""
stocks = self.root.findall(".//" + self.STOCK_KEY + "InversionesFinancierasRVCotizada")
pandas_list = []
for stock in stocks:
dict_aux = {}
# Meta information
dict_aux['CodigoISIN'] = stock.find(self.STOCK_KEY + "CodigoISIN").text
dict_aux['Name'] = stock.find(self.STOCK_KEY + "InversionesFinancierasDescripcion").text.split("|")[1]
importes = stock.findall(self.STOCK_KEY + "InversionesFinancierasImporte")
t_key = importes[0].find(self.STOCK_KEY + "InversionesFinancierasValor").get('contextRef')
if t_key.endswith('ia'):
current = importes[0]
try:
before = importes[1]
except:
before = None
else:
before = importes[0]
try:
current = importes[1]
except:
current = None
# Importes
try:
dict_aux['Importe_0'] = float(before.find(self.STOCK_KEY + "InversionesFinancierasValor").text)
dict_aux['Percent_0'] = float(before.find(self.STOCK_KEY + "InversionesFinancierasPorcentaje").text)
except:
dict_aux['Importe_0'] = 0.0
dict_aux['Percent_0'] = 0.0
try:
dict_aux['Importe_1'] = float(current.find(self.STOCK_KEY + "InversionesFinancierasValor").text)
dict_aux['Percent_1'] = float(current.find(self.STOCK_KEY + "InversionesFinancierasPorcentaje").text)
except:
dict_aux['Importe_1'] = 0.0
dict_aux['Percent_1'] = 0.0
# Additional information
dict_aux['start_date'] = self.start_date
dict_aux['end_date'] = self.end_date
dict_aux['fund'] = self.fund_name
dict_aux['manager'] = self.management
pandas_list.append(dict_aux)
self.df = pd.DataFrame(pandas_list)
return self.df
def parse_xbrl(self):
"""Pipeline method
"""
self.set_root()
self.set_dates()
df = self.parse_stocks()
return df
cobas = XBRLParser('COBAS', 'COBAS IBERIA', './data/cobas_iberia_H1_17.XML')
df1 = cobas.parse_xbrl()
azvalor = XBRLParser('AZ Valor', 'AZ Valor IBERIA', './data/az_valor_iberia_H1_17.XML')
df2 = azvalor.parse_xbrl()
bestinver = XBRLParser('Bestinver', 'Bestinver Bolsa', './data/bestinver_iberia_H1_17.XML')
df3 = bestinver.parse_xbrl()
## CONCAT ALL funds
df = pd.concat([df1, df2, df3])
def plot_fund_dist(fund_name, df):
fig = plt.figure(figsize=(15,10))
select = df[df['fund']==fund_name][['Percent_1', 'Percent_0', 'Name']].sort_values('Percent_1', ascending=False)
ind = np.arange(select.shape[0])
width = 0.35
p1 = plt.bar(ind, select['Percent_1'], width)
p2 = plt.bar(ind+width, select['Percent_0'], width)
plt.ylabel('PCT in fund', fontsize=14)
plt.xticks(ind+width, select['Name'], rotation=90, fontsize=12)
plt.title(fund_name, fontsize=14)
plt.grid()
plt.legend(('EoP','BoP'))
plot_fund_dist('COBAS IBERIA', df)
plot_fund_dist('AZ Valor IBERIA', df)
plot_fund_dist('Bestinver Bolsa', df)<jupyter_output><empty_output>
|
permissive
|
/fund_app/notebooks/XBRL Parser.ipynb
|
vioquedu/Spanish-funds
| 3 |
<jupyter_start><jupyter_text>
Welcome to Colaboratory!
Colaboratory is a free Jupyter notebook environment that requires no setup and runs entirely in the cloud.
With Colaboratory you can write and execute code, save and share your analyses, and access powerful computing resources, all for free from your browser.<jupyter_code>#@title Introducing Colaboratory { display-mode: "form" }
#@markdown This 3-minute video gives an overview of the key features of Colaboratory:
from IPython.display import YouTubeVideo
YouTubeVideo('inN8seMm7UI', width=600, height=400)<jupyter_output><empty_output><jupyter_text>## Getting Started
The document you are reading is a [Jupyter notebook](https://jupyter.org/), hosted in Colaboratory. It is not a static page, but an interactive environment that lets you write and execute code in Python and other languages.
For example, here is a **code cell** with a short Python script that computes a value, stores it in a variable, and prints the result:<jupyter_code>seconds_in_a_day = 24 * 60 * 60
seconds_in_a_day<jupyter_output><empty_output><jupyter_text>To execute the code in the above cell, select it with a click and then either press the play button to the left of the code, or use the keyboard shortcut "Command/Ctrl+Enter".
All cells modify the same global state, so variables that you define by executing a cell can be used in other cells:<jupyter_code>seconds_in_a_week = 7 * seconds_in_a_day
seconds_in_a_week<jupyter_output><empty_output><jupyter_text>For more information about working with Colaboratory notebooks, see [Overview of Colaboratory](/notebooks/basic_features_overview.ipynb).
## More Resources
Learn how to make the most of Python, Jupyter, Colaboratory, and related tools with these resources:
### Working with Notebooks in Colaboratory
- [Overview of Colaboratory](/notebooks/basic_features_overview.ipynb)
- [Guide to Markdown](/notebooks/markdown_guide.ipynb)
- [Importing libraries and installing dependencies](/notebooks/snippets/importing_libraries.ipynb)
- [Saving and loading notebooks in GitHub](https://colab.research.google.com/github/googlecolab/colabtools/blob/master/notebooks/colab-github-demo.ipynb)
- [Interactive forms](/notebooks/forms.ipynb)
- [Interactive widgets](/notebooks/widgets.ipynb)
-
[TensorFlow 2 in Colab](/notebooks/tensorflow_version.ipynb)
### Working with Data
- [Loading data: Drive, Sheets, and Google Cloud Storage](/notebooks/io.ipynb)
- [Charts: visualizing data](/notebooks/charts.ipynb)
- [Getting started with BigQuery](/notebooks/bigquery.ipynb)
### Machine Learning Crash Course
These are a few of the notebooks from Google's online Machine Learning course. See the [full course website](https://developers.google.com/machine-learning/crash-course/) for more.
- [Intro to Pandas](/notebooks/mlcc/intro_to_pandas.ipynb)
- [Tensorflow concepts](/notebooks/mlcc/tensorflow_programming_concepts.ipynb)
- [First steps with TensorFlow](/notebooks/mlcc/first_steps_with_tensor_flow.ipynb)
- [Intro to neural nets](/notebooks/mlcc/intro_to_neural_nets.ipynb)
- [Intro to sparse data and embeddings](/notebooks/mlcc/intro_to_sparse_data_and_embeddings.ipynb)
### Using Accelerated Hardware
- [TensorFlow with GPUs](/notebooks/gpu.ipynb)
- [TensorFlow with TPUs](/notebooks/tpu.ipynb)## Machine Learning Examples: Seedbank
To see end-to-end examples of the interactive machine learning analyses that Colaboratory makes possible, check out the [Seedbank](https://research.google.com/seedbank/) project.
A few featured examples:
- [Neural Style Transfer](https://research.google.com/seedbank/seed/neural_style_transfer_with_tfkeras): Use deep learning to transfer style between images.
- [EZ NSynth](https://research.google.com/seedbank/seed/ez_nsynth): Synthesize audio with WaveNet auto-encoders.
- [Fashion MNIST with Keras and TPUs](https://research.google.com/seedbank/seed/fashion_mnist_with_keras_and_tpus): Classify fashion-related images with deep learning.
- [DeepDream](https://research.google.com/seedbank/seed/deepdream): Produce DeepDream images from your own photos.
- [Convolutional VAE](https://research.google.com/seedbank/seed/convolutional_vae): Create a generative model of handwritten digits.<jupyter_code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import scipy.stats as f
%matplotlib inline
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
#The weights of steers in a herd are distributed normally. The variance is 40,000 and the mean steer weight is 1300 lbs.
#Find the probability that the weight of a randomly selected steer is greater than 979 lbs. (Round your answer to 4 decimal places)
round(f.norm.sf(979, 1300, ((40000)**0.5)),4)
#SVGA monitors manufactured by TSI Electronics have life spans that have a normal distribution with a variance of 1,960,000
#and a mean life span of 11,000 hours. If a SVGA monitor is selected at random, find the probability that the life span of the
#monitor will be more than 8340 hours. (Round your answer to 4 decimal places)
round(f.norm.sf(8340, 11000, ((1960000)**0.5)),4)
#Suppose the mean income of firms in the industry for a year is 80 million dollars with a standard deviation of 3 million dollars.
#If incomes for the industry are distributed normally, what is the probability that a randomly selected firm will earn between 83 and 85 million dollars?
#(Round your answer to 4 decimal places)
low_bound = f.norm.sf(83, 80, 3)
high_bound = f.norm.sf(85, 80, 3)
round(low_bound - high_bound, 4)
#Suppose GRE Verbal scores are normally distributed with a mean of 456 and a standard deviation of 123.
#A university plans to offer tutoring jobs to students whose scores are in the top 14%.
#What is the minimum score required for the job offer? Round your answer to the nearest whole number, if necessary.
round(f.norm.isf(0.14, 456, 123))
#The lengths of nails produced in a factory are normally distributed with a mean of 6.13 centimeters and a standard deviation of 0.06 centimeters.
#Find the two lengths that separate the top 7% and the bottom 7%. These lengths could serve as limits used to identify which nails should be rejected.
#Round your answer to the nearest hundredth, if necessary.
round(f.norm.ppf(0.07, 6.13, 0.06),2)
print()
round(f.norm.isf(0.07, 6.13, 0.06),2)
print()
#An English professor assigns letter grades on a test according to the following scheme.
# A: Top 13% of scores
# B: Scores below the top 13% and above the bottom 55%
# C: Scores below the top 45% and above the bottom 20%
# D: Scores below the top 80% and above the bottom 9%
# F: Bottom 9% of scores
# Scores on the test are normally distributed with a mean of 78.8 and a standard deviation of 9.8.
#Find the numerical limits for a C grade. Round your answers to the nearest whole number, if necessary.
round(f.norm.ppf(0.2, 78.8, 9.8))
print()
round(f.norm.isf(0.45, 78.8, 9.8))
print()
#Suppose ACT Composite scores are normally distributed with a mean of 21.2 and a standard deviation of 5.4.
#A university plans to admit students whose scores are in the top 45%. What is the minimum score required for admission?
#Round your answer to the nearest tenth, if necessary.
round(f.norm.isf(0.45, 21.2, 5.4),1)
#Consider the probability that less than 11 out of 151 students will not graduate on time.
#Assume the probability that a given student will not graduate on time is 9%.
#Approximate the probability using the normal distribution. (Round your answer to 4 decimal places.)
round(f.binom.cdf(10, 151, 0.09),4)
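# note: this is the exact binomial CDF; the normal approximation the exercise asks for
# would be f.norm.cdf(10.5, 151*0.09, (151*0.09*0.91)**0.5), using a continuity correction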
#The mean lifetime of a tire is 48 months with a standard deviation of 7.
#If 147 tires are sampled, what is the probability that the mean of the sample would be greater than 48.83 months?
#(Round your answer to 4 decimal places)
round(f.norm.sf(48.83, 48, (7/(147)**0.5)),4)
#The quality control manager at a computer manufacturing company believes that the mean life of a computer is 91 months,
#with a standard deviation of 10. If he is correct, what is the probability that the mean of a sample of 68 computers
#would be greater than 93.54 months? (Round your answer to 4 decimal places)
round(f.norm.sf(93.54, 91, (10/(68)**0.5)),4)
#A director of reservations believes that 7% of the ticketed passengers are no-shows.
#If the director is right, what is the probability that the proportion of no-shows in a sample of 540 ticketed passengers
#would differ from the population proportion by less than 3%? (Round your answer to 4 decimal places)
round(f.norm.cdf(0.1, 0.07, (0.07*0.93/540)**0.5) - f.norm.cdf(0.04, 0.07, (0.07*0.93/540)**0.5),4)
#A bottle maker believes that 23% of his bottles are defective. If the bottle maker is accurate, what is the
#probability that the proportion of defective bottles in a sample of 602 bottles would differ from the population proportion by greater than 4%?
#(Round your answer to 4 decimal places)
round(1 - round(f.norm.cdf(0.27, 0.23, (0.23*0.77/602)**0.5) - f.norm.cdf(0.19, 0.23, (0.23*0.77/602)**0.5),4),4)
#A research company desires to know the mean consumption of beef per week among males over age 48.
#Suppose a sample of size 208 is drawn with x_bar = 3.9. Assume s = 0.8 .
#Construct the 80% confidence interval for the mean number of lb. of beef per week among males over 48.
#(Round your answers to 1 decimal place)
round(3.9 - f.norm.ppf(0.9)*0.8/(208**0.5),1)
print()
round(3.9 + f.norm.ppf(0.9)*0.8/(208**0.5),1)
print()
#An economist wants to estimate the mean per capita income (in thousands of dollars) in a major city in California.
#Suppose a sample of size 7472 is drawn with x_bar = 16.6. Assume s = 11.
#Construct the 98% confidence interval for the mean per capita income. (Round your answers to 1 decimal place)
round(16.6 - f.norm.ppf(0.99)*11/(7472**0.5),1)
print()
round(16.6 + f.norm.ppf(0.99)*11/(7472**0.5),1)
print()
#Find the value of t such that 0.05 of the area under the curve is to the left of t.
#Assume the degrees of freedom equals 26.
#Upper-right graph
round(f.t.ppf(0.05, 26),4)
import numpy as np
#The following measurements ( in picocuries per liter ) were recorded by a set of helium gas detectors installed in a laboratory facility:
#383.6, 347.1, 371.9, 347.6, 325.8, 337
#Using these measurements, construct a 90% confidence interval for the mean level of helium gas present in the facility.
#Assume the population is normally distributed.
#Step 1. Calculate the sample mean for the given sample data. (Round answer to 2 decimal places)
#Step 2. Calculate the sample standard deviation for the given sample data. (Round answer to 2 decimal places)
#Step 3. Find the critical value that should be used in constructing the confidence interval. (Round answer to 3 decimal place.
#Step 4. Construct the 90% confidence interval. (Round answer to 2 decimal places)
s = np.array([383.6, 347.1, 371.9, 347.6, 325.8, 337])
round(np.mean(s),2)
print()
round(np.std(s, ddof=1),2)  # ddof=1 gives the sample (not population) standard deviation
print()
round(f.t.ppf(0.05, 5),3)
print()
round(f.t.ppf(0.95, 5),3)
print()
round(np.mean(s) - (f.norm.ppf(0.95)*np.std(s)/(6**0.5)),2)
print()
round(np.mean(s) + (f.norm.ppf(0.95)*np.std(s)/(6**0.5)),2)
print()
#A random sample of 16 fields of spring wheat has a mean yield of 46.4 bushels per acre and standard deviation of 2.45 bushels per acre.
#Determine the 80% confidence interval for the true mean yield. Assume the population is normally distributed.
#Step 1. Find the critical value that should be used in constructing the confidence interval. (Round answer to 3 decimal places)
#Step 2. Construct the 80% confidence interval. (Round answer to 1 decimal place)
round(f.t.ppf(0.1, 15),3)
print()
round(f.t.ppf(0.9, 15),3)
print()
round(46.4 - (f.norm.ppf(0.9)*2.45/(16**0.5)),1)
print()
round(46.4 + (f.norm.ppf(0.9)*2.45/(16**0.5)),1)
print()
#A toy manufacturer wants to know how many new toys children buy each year. She thinks the mean is 8 toys per year.
#Assume a previous study found the standard deviation to be 1.9.
#How large of a sample would be required in order to estimate the mean number of toys bought per child at the 99% confidence level
#with an error of at most 0.13 toys? (Round your answer up to the next integer)
z = f.norm.isf(0.005, 0, 1)
print(z)
round(((z*1.9/0.13)**2))
print()
#A research scientist wants to know how many times per hour a certain strand of bacteria reproduces.
#He believes that the mean is 12.6. Assume the variance is known to be 3.61.
#How large of a sample would be required in order to estimate the mean number of reproductions per hour at the 95% confidence level
#with an error of at most 0.19 reproductions?
#(Round your answer up to the next integer)
z = f.norm.isf(0.025, 0, 1)
print(z)
a = (3.61)**0.5
print(a)
round((z*a/0.19)**2)
print()
#The state education commission wants to estimate the fraction of tenth grade students that have reading skills at or below the eighth grade level.
#Step 1. Suppose a sample of 2089 tenth graders is drawn. Of the students sampled, 1734 read above the eighth grade level.
#Using the data, estimate the proportion of tenth graders reading at or below the eighth grade level.
#(Write your answer as a fraction or a decimal number rounded to 3 decimal places)
#Step 2. Suppose a sample of 2089 tenth graders is drawn. Of the students sampled, 1734 read above the eighth grade level.
#Using the data, construct the 98% confidence interval for the population proportion of tenth graders reading at or below the eighth grade level.
#(Round your answers to 3 decimal places)
p = round((2089 - 1734)/2089,3)
print(p)
z = f.norm.isf(0.01, 0, 1)
print(z)
round(p -(z * (p*(1-p)/2089)**0.5),3)
print()
round(p +(z * (p*(1-p)/2089)**0.5),3)
print()
#An environmentalist wants to find out the fraction of oil tankers that have spills each month.
#Step 1. Suppose a sample of 474 tankers is drawn. Of these ships, 156 had spills.
#Using the data, estimate the proportion of oil tankers that had spills. (Write your answer as a fraction or a decimal number rounded to 3 decimal places)
#Step 2. Suppose a sample of 474 tankers is drawn. Of these ships, 156 had spills.
#Using the data, construct the 95% confidence interval for the population proportion of oil tankers that have spills each month.
#(Round your answers to 3 decimal places)
p = 156/474
print(p)
z = f.norm.isf(0.025, 0, 1)
print(z)
round(p -(z * (p*(1-p)/474)**0.5),3)
print()
round(p +(z * (p*(1-p)/474)**0.5),3)
print()<jupyter_output>0.3291139240506329
1.9599639845400545
|
no_license
|
/Copy_of_Welcome_To_Colaboratory.ipynb
|
tponnada/datasciencecoursera
| 4 |
<jupyter_start><jupyter_text><jupyter_code>import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline<jupyter_output><empty_output><jupyter_text>##### Importing Data <jupyter_code>url="http://bit.ly/w-data"<jupyter_output><empty_output><jupyter_text>##### Storing Data in CSV file<jupyter_code>student_marks=pd.read_csv(url)<jupyter_output><empty_output><jupyter_text>##### Checking the Data<jupyter_code>student_marks.head(15)
student_marks.describe()<jupyter_output><empty_output><jupyter_text>##### Plotting the graph<jupyter_code>student_marks.plot(x='Hours' , y='Scores',kind='bar',figsize=(12,8))
plt.title('Hours Vs Scores')
plt.xlabel('Hours')
plt.ylabel('Scores')
plt.show()
x=student_marks.iloc[:,:-1].values
y=student_marks.iloc[:,1].values<jupyter_output><empty_output><jupyter_text>##### Training and Testing the model<jupyter_code>from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test=train_test_split(x,y,test_size=0.2,random_state=0)
from sklearn.linear_model import LinearRegression
linear_regressor=LinearRegression()
linear_regressor.fit(x_train,y_train)
line=linear_regressor.coef_*x+linear_regressor.intercept_
plt.scatter(x,y,color='orange')
plt.plot(x,line);
plt.show()
print(x_test)
y_pred=linear_regressor.predict(x_test)
df = pd.DataFrame({'Actual': y_test, 'Predicted': y_pred})
df
<jupyter_output><empty_output><jupyter_text>#####*Compare Actual value and Predicted values*<jupyter_code>sns.set_style('whitegrid')
df.plot(kind='bar',figsize=(12,8))<jupyter_output><empty_output><jupyter_text>**Question: What will be predicted score if a student studies for 9.25 hrs/ day?**<jupyter_code>hour=[[9.25]]
pred_score=linear_regressor.predict(hour)
print('Predicted Score :',pred_score[0],'%')<jupyter_output>Predicted Score : 93.69173248737539 %
|
no_license
|
/Task-1 Prediction using Supervised ML/ Task_1.ipynb
|
ShreyasGawande/GRIP-TSF-Data-Science
| 8 |
<jupyter_start><jupyter_text>Low maintenance arsenal for our quest.<jupyter_code>%matplotlib inline
import pandas as pd
import numpy as np
#no warnings because I like my stuff clean.
import warnings
warnings.filterwarnings('ignore')
#pipemaster
from sklearn.pipeline import Pipeline<jupyter_output><empty_output><jupyter_text>Load the training and test data.<jupyter_code>train = pd.read_csv('data/Train.csv')
test = pd.read_csv('data/test.csv')<jupyter_output><empty_output><jupyter_text>***EDA***Size<jupyter_code>train.shape<jupyter_output><empty_output><jupyter_text>What do you have there?<jupyter_code>train.info()<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 401125 entries, 0 to 401124
Data columns (total 53 columns):
SalesID 401125 non-null int64
SalePrice 401125 non-null int64
MachineID 401125 non-null int64
ModelID 401125 non-null int64
datasource 401125 non-null int64
auctioneerID 380989 non-null float64
YearMade 401125 non-null int64
MachineHoursCurrentMeter 142765 non-null float64
UsageBand 69639 non-null object
saledate 401125 non-null object
fiModelDesc 401125 non-null object
fiBaseModel 401125 non-null object
fiSecondaryDesc 263934 non-null object
fiModelSeries 56908 non-null object
fiModelDescriptor 71919 non-null object
ProductSize 190350 non-null object
fiProductClassDesc 401125 non-null object
state 4[...]<jupyter_text>Sad, a lot of these columns have significant amounts of missing data.
We will need to deal with those, one way or another. However I want to create an initial basic model in order to observe the model improvement through feature engineering.
Following code creates a new data frame with the columns that have absolutely no null values. <jupyter_code># count the non-null values in each column (.count() applied with axis=0 runs down each column)
column_counts = train.apply(lambda x: x.count(), axis=0)
#decide on which columns to keep(only the columns with maximum column count)
keep_columns = column_counts[column_counts == column_counts.max()]
# create the dataframe
dense_train = train.loc[:,keep_columns.index]
dense_train.shape<jupyter_output><empty_output><jupyter_text>Looks like there are only 13 columns without missing values. eh not bad for early stage.<jupyter_code>dense_train.info()<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 401125 entries, 0 to 401124
Data columns (total 13 columns):
SalesID 401125 non-null int64
SalePrice 401125 non-null int64
MachineID 401125 non-null int64
ModelID 401125 non-null int64
datasource 401125 non-null int64
YearMade 401125 non-null int64
saledate 401125 non-null object
fiModelDesc 401125 non-null object
fiBaseModel 401125 non-null object
fiProductClassDesc 401125 non-null object
state 401125 non-null object
ProductGroup 401125 non-null object
ProductGroupDesc 401125 non-null object
dtypes: int64(6), object(7)
memory usage: 39.8+ MB
<jupyter_text>salesid, machineid and modelid are integers. Notable problem for our model.
Following code changes type of the mentioned columns on the dataset to string.<jupyter_code>id_cols = ['SalesID', 'MachineID', 'ModelID']
dense_train[id_cols] = dense_train[id_cols].astype('str')
print(type(dense_train.SalesID[0]))
print(type(dense_train['MachineID'][0]))
print(type(dense_train.ModelID[0]))
dense_train.describe()<jupyter_output><empty_output><jupyter_text>Minimum value of YearMade is 1000. I don't think tractors were invented back then. Let's have a closer look.<jupyter_code>dense_train.YearMade.hist();
dense_train.YearMade.value_counts().head()<jupyter_output><empty_output><jupyter_text>Alright, we have 38,185 tractors made in year 1000 in our dataset, a significant group of outliers. I assume these are missing values, and there are a couple of ways to deal with missing values.
Options:
- if the outliers account for a small share of our data, we can simply drop them.
- we can fill them with the mode, median, mean, max or min, depending on the data.
- we can create an algorithm to predict the missing values, which is a complete new project at this stage. (A quick sketch of the first two options follows below.)
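A minimal, hypothetical sketch of the first two options (it reuses the dense_train frame from above; the median fill mirrors what we actually do a few cells below):<jupyter_code># option 1: drop the rows that carry the placeholder year
dropped = dense_train[dense_train.YearMade > 1000]
# option 2: replace the placeholder year with the median of the valid years
valid_median = dense_train.YearMade[dense_train.YearMade > 1000].median()
filled_yearmade = dense_train.YearMade.replace(1000, valid_median)<jupyter_output><empty_output><jupyter_text>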
get the ratio:<jupyter_code>(dense_train.YearMade.value_counts() / dense_train.YearMade.count()).head()<jupyter_output><empty_output><jupyter_text>YearMade: 1000 stands for almost 10% of our data. I am not giving up on that.<jupyter_code>dense_train.YearMade.value_counts().hist()<jupyter_output><empty_output><jupyter_text>Lets calculate mean median and mode to see which one makes more sense to replace missing values.<jupyter_code>print(f'mean: {dense_train.YearMade.mean()}')
print(f'mode: {dense_train.YearMade.mode()}')
print(f'median: {dense_train.YearMade.median()}')<jupyter_output>mean: 1899.1569012153318
mode: 0 1000
dtype: int64
median: 1995.0
<jupyter_text>Hmm I think I will go with the mean because it better represents the dataset.
**AHA! Tricked you!** If we simply run the analysis on the raw data as given, we include the 38,000 missing (wrong) values in our calculations.
Solution: Set a condition before running the calculations. Let's just consider tractors that are made after year 1900 (I decided to keep the number a little low because, who knows, maybe some people buy very old tractors for collection purposes, weird hobby eh? But who am I to judge...)<jupyter_code>condition = dense_train.YearMade > 1900
mean = dense_train.YearMade[condition].mean()
mode = dense_train.YearMade[condition].mode()
median = dense_train.YearMade[condition].median()
print(f'mean: {mean}')
print(f'mode: {mode}')
print(f'median: {median}')<jupyter_output>mean: 1993.7574034275638
mode: 0 1998
dtype: int64
median: 1996.0
<jupyter_text>Alright all these years are close to each other so I will simply go with the median which in between max and min.<jupyter_code>dense_train.loc[~condition, 'YearMade'] = int(median)
dense_train.YearMade.value_counts().head()<jupyter_output><empty_output><jupyter_text>## Dependent Variable(target - y)
Lets have a look at this one.<jupyter_code>dense_train.SalePrice.hist()<jupyter_output><empty_output><jupyter_text>This picture is a little bit surprising, it looks more like an exponential distribution than anything else. We might expect a more normal distribution for any given vehicle, but evidently our data set is so full of different types of vehicles that we get this shape. There are also apparently a lot of low-priced vehicles.
Since our RMSLE error metric penalizes underpredictions, we'll want to make sure that we do well on the right tail of this distribution and don't just focus on the common vehicles.### Basic Model
We've taken care of some outlier values in the model year, but let's step back and talk about why.
Since we're talking about used equipment here, there are a few things that I believe matter to the price. If I were to rank them I would list them from most to least important as:
based on common sense
- What is a reasonable price for this vehicle? We don't have any columns in our data that tell us this, but the idea is that there are probably very different prices paid for more or less expensive pieces of equipment. If I knew what original cost of the vehicles was, this, along with the age would probably help me. As it is, I can probably get a estimate of the price by taking an average of similar vehicles sold recently. What the best notion of similar should be is unclear... so we may want to experiement in this regard.
- The age of the vehicle. Surely older vehicles cost less than newer ones right? We have the model year, but we'll need to compute the age on the date of sale for best results.
- How heavily used the vehicle is. Surely more heavily used vehicles are less expensive. Unfortunately there are a lot of missing values in the machinehoursusage. Let's investigate.
As we discover things that are relevant we will include them in our more rigorous model file in heavy_equipment_model.py. That's where will ultimately do cross validation and use pipelining to assure that our model works on unseen data.**machine hours**<jupyter_code>mhcm = train.MachineHoursCurrentMeter
print((mhcm == 0).sum(), ' zero values')
print(mhcm.isnull().sum(), ' missing values')<jupyter_output>73126 zero values
258360 missing values
<jupyter_text>So in addition to the missing values, there are a lot of values set at zero, which probably represent missing vals too. Lets have a closer look.<jupyter_code>mhcm.hist()<jupyter_output><empty_output><jupyter_text>This kind of worthless histogram can happen when there are outliers.
Solution: We know there are also a lot of zeros, so let's clean those out and try to get a sense of the shape in a reasonably dense range.<jupyter_code>mhcm[(mhcm>0) & (mhcm<50000)].hist()<jupyter_output><empty_output><jupyter_text>Ok this sort of worked, but it looks like we have a really long tail of outliers, which is going to be cause for concern. How can we effectively learn the behavior of our model when the data takes on extreme values but only infrequently?<jupyter_code>(mhcm > 100000).sum()
mhcm.plot(kind='box')<jupyter_output><empty_output><jupyter_text>
It's not obvious these things are outliers in the sense that they're just awkward data; it seems possible there are some really heavily used machines in the auction occasionally. We could consider ignoring these things, there are only a few hundred, but we'll certainly want to keep these in mind as we proceed, who knows, maybe they speak a different language.
For now, let's not focus on the hours, this data is looking very difficult, so let's try to build an initial model that doesn't include it.
For equipment age, we can calculate this by taking the difference between the saledate and the year made.<jupyter_code>dense_train['saledate_converted'] = pd.to_datetime(dense_train.saledate)
dense_train.saledate[0] #old date version
dense_train.saledate_converted[0] #converted date version<jupyter_output><empty_output><jupyter_text>We are subtracting YearMade from the sale year in order to find the age of the vehicle at the time of sale.<jupyter_code>dense_train['equipment_age'] = dense_train.saledate_converted.dt.year - dense_train.YearMade
dense_train.equipment_age.hist()<jupyter_output><empty_output><jupyter_text>Ok, this graph looks pretty reasonable, but maybe a few outliers on both sides:
I don't think equipment age, or anything's age, can be -12. So let's fix that.
<jupyter_code>dense_train.equipment_age.describe()
age = dense_train.equipment_age
(age<0).sum(), (age > 50).sum()<jupyter_output><empty_output><jupyter_text>There are only 196 vehicles that are older than 50 years old but the actual problem is with the vehicles that are aged somehow less than 0.Let's see what our variables look like here.<jupyter_code>pd.plotting.scatter_matrix(dense_train[['SalePrice', 'equipment_age']]);<jupyter_output><empty_output><jupyter_text>Based on the graph above, it looks like there's a bit of a collection of outliers at around 60 years old. We might be able to filter those out, or want to build another model on them.
Ok, now we return to 1. above. We want to an average price of similar vehicle. There are many ways to think about similarity including:
- Geography (look at nearby sales)
- Time (look at recent sales)
- Vehicle (look at sales that are most similar in terms of vehicle)
- Similar usage
- Similar age etc.
Ultimately choosing what similarity metric we should use is quite difficult, so for now let's try to do something easy and reasonable. What about the five most recent sales of the same modelid?
If there aren't 5 such sales of course we'll have to do something else, but let's see how big that problem is before we solve it.<jupyter_code>#Set multi-level index and sort by modelid and saledate, so we can use window functions.
dense_train.sort_values(by='saledate_converted', inplace=True)
dense_train.head()
dense_train.tail()
m = dense_train.groupby('ModelID')['SalePrice'].apply(lambda x: x.rolling(5).agg([np.mean]))
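# for each ModelID, rolling(5).agg([np.mean]) yields the mean SalePrice of that sale and the
# four before it (NaN until five sales exist); the one-day shift below keeps this average from
# leaking the current sale when we merge it back on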
m.tail()
from datetime import timedelta
timedelta(1)
z = pd.concat([m, dense_train[['saledate_converted', 'ModelID', 'SalesID']]], axis=1)
z['saledate_converted'] = z.saledate_converted + timedelta(1)
#Some days will have more than 1 transaction for a particular model, take the last mean (which has most info)
z = z.groupby(['ModelID', 'saledate_converted']).apply(lambda x: x.tail(1))
z.head()
z.head().columns<jupyter_output><empty_output><jupyter_text>
If we're going to use this as an input, we don't want to use an average value that could include the price of the vehicle we're trying to predict. So we'll want to shift the dates so that we can merge on the average we would have calculated the day before observing the transaction.
This is a little tricky, we need to find the closest date that is less than the transaction date and for which we have an average.
We're going to merge based on modelid and date then use fill forward to get the nearest value for any transaction (there will still be missing values).<jupyter_code>near_price = pd.merge(z.drop('SalesID', axis=1), dense_train, how='outer',
on=['ModelID', 'saledate_converted'])
near_price = near_price.set_index(['ModelID', 'saledate_converted']).sort_index()
g = near_price['mean'].groupby(level=0)
near_price['filled_mean_price'] = g.transform(lambda x: x.fillna(method='ffill'))<jupyter_output><empty_output><jupyter_text>Ok, in the dataset above, the observations missing a SalesID come only from our averages table and can be dropped.<jupyter_code>print(near_price.shape)
near_price.head(2)
near_price = near_price[near_price.SalesID.notnull()]
near_price.shape<jupyter_output><empty_output><jupyter_text>
Ok, we're getting there. Some of our transactions in this data set will not have a 'filled mean price' because there were no previously observed transactions. Fortunately there aren't too many such observations (though this is all within the training data, so it's likely that in reality we might encounter more such cases). It's possible we could build a separate model for predicing these transactions.<jupyter_code>near_price.filled_mean_price.describe()<jupyter_output><empty_output><jupyter_text>What was the point of all this recent mean nonsense? Well, it was my belief that the recent mean would have predictive power of the saleprice. Was it worth it? The histogram of mean price - sale price should be instructive:<jupyter_code>(near_price.filled_mean_price).hist()
pd.plotting.scatter_matrix(near_price[['SalePrice', 'filled_mean_price', 'equipment_age']]);
def percentile_plots(x, y):
cuts = pd.cut(x, 20)
y.groupby(cuts).mean().plot()
percentile_plots(near_price.filled_mean_price, near_price.SalePrice)
percentile_plots(near_price.YearMade, near_price.SalePrice)<jupyter_output><empty_output><jupyter_text>So the mean price and sale price are definitely correlated here, it should certainly be helpful.
Ok, so we are now ready to build our first model.
I've been building this functionality into heavy_equipment_model.py, which you should take a look at now. It uses the same ideas as above, but implements them using the Pipeline class, which is extremely helpful for preventing target leakage.
One thing we'll want to do now is set up a cross validation set. Our heldout set is future transactions, so we're going to want to choose a cv set using time as well. Let's see how we might do that.<jupyter_code>dense_train.saledate_converted.hist()<jupyter_output><empty_output><jupyter_text>Ok, looks like we have more transactions recently, and we don't want to make our lives too hard by choosing a validation set that includes all of our most useful data.<jupyter_code>dense_train.saledate_converted.quantile([.7, .8, .9, 1])<jupyter_output><empty_output><jupyter_text>Based on this, I would say that we probably don't want to use the usual 70/30 split, because it looks like maybe there was a change in our data collection around 2009. We've got a little less than 10% of our data represented by 2011, which isn't a huge cv data set, but I think makes sense under the circumstances. If anything, it's on the small side, so it might be reasonable to consider other datasets.<jupyter_code>(near_price.filled_mean_price-near_price.SalePrice).hist()<jupyter_output><empty_output>
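<jupyter_text>As a sketch of the time-based split discussed above (the cutoff quantile is illustrative, not from the original analysis; it roughly matches holding out the most recent ~10% of transactions):<jupyter_code># Time-based train/validation split sketch: hold out the most recent transactions.
cutoff = dense_train.saledate_converted.quantile(0.9)
cv_train = dense_train[dense_train.saledate_converted < cutoff]
cv_valid = dense_train[dense_train.saledate_converted >= cutoff]
print(cv_train.shape, cv_valid.shape, cutoff)<jupyter_output><empty_output>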
|
no_license
|
/regression_tracktor_UJ.ipynb
|
u-jan/Used-Tracktor-Price-Prediction
| 30 |
<jupyter_start><jupyter_text>Saving and loading a general checkpoint model for inference or for resuming training helps you pick up where you left off. When saving a general checkpoint, you must save more than just the model's state_dict. It is also important to save the optimizer's state_dict, since it contains buffers and parameters that are updated as the model trains. Other items you may want to save include the epoch you left off on, the latest recorded training loss, external torch.nn.Embedding layers, and so on, depending on your own algorithm.
# Introduction
To save multiple checkpoints, you must organize them in a dictionary and use torch.save() to serialize the dictionary. A common PyTorch convention is to save these checkpoints using the .tar file extension. To load the items, first initialize the model and optimizer, then load the dictionary locally using torch.load(). From there, you can easily access the saved items simply by querying the dictionary as you would expect.
In this recipe, we will explore how to save and load multiple checkpoints.
# Steps
* Import all necessary libraries for loading our data
* Define and initialize the neural network
* Initialize the optimizer
* Save the general checkpoint
* Load the general checkpoint
1. Import the necessary libraries to load our data
For this recipe, we will use torch and its submodules torch.nn and torch.optim.<jupyter_code>import torch
import torch.nn as nn
import torch.nn.functional as F  # needed for F.relu in the forward pass of Net below
import torch.optim as optim<jupyter_output>time: 690 ms (started: 2021-09-05 13:50:05 +08:00)
<jupyter_text>2. Define and initialize the neural network
For example, we will create a neural network for training on images. To learn more, see the Defining a Neural Network recipe.<jupyter_code>class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net()
print(net)<jupyter_output>Net(
(conv1): Conv2d(3, 6, kernel_size=(5, 5), stride=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
(fc1): Linear(in_features=400, out_features=120, bias=True)
(fc2): Linear(in_features=120, out_features=84, bias=True)
(fc3): Linear(in_features=84, out_features=10, bias=True)
)
time: 29.4 ms (started: 2021-09-05 13:50:30 +08:00)
<jupyter_text>3. Initialize the optimizer
We will use SGD with momentum.<jupyter_code>optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)<jupyter_output>time: 1.13 ms (started: 2021-09-05 13:50:59 +08:00)
<jupyter_text>4. Save the general checkpoint
Gather all relevant information and build your dictionary.<jupyter_code># Additional information
EPOCH = 5
PATH = "model.pt"
LOSS = 0.4
torch.save({
'epoch': EPOCH,
'model_state_dict': net.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'loss': LOSS,
}, PATH)<jupyter_output>time: 4.72 ms (started: 2021-09-05 13:53:11 +08:00)
<jupyter_text>5. Load the general checkpoint
Remember to initialize the model and optimizer first, then load the dictionary locally.<jupyter_code>model = Net()
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)  # use the new model's parameters so resumed training updates `model`
checkpoint = torch.load(PATH)
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
epoch = checkpoint['epoch']
loss = checkpoint['loss']
model.eval()
# - or -
model.train()<jupyter_output><empty_output>
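<jupyter_text>A minimal sketch (not part of the original recipe) of resuming training from the loaded checkpoint; the dummy input and target shapes are illustrative only and match the Net architecture defined above.<jupyter_code># Resume training from the saved epoch (illustrative data; replace with your real DataLoader).
import torch.nn.functional as F
for resumed_epoch in range(epoch, epoch + 2):
    inputs = torch.randn(4, 3, 32, 32)        # illustrative batch of 32x32 RGB images
    targets = torch.randint(0, 10, (4,))      # illustrative class labels
    optimizer.zero_grad()
    outputs = model(inputs)
    loss = F.cross_entropy(outputs, targets)
    loss.backward()
    optimizer.step()<jupyter_output><empty_output>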
|
no_license
|
/Pytorch_recipes/在 PYTORCH 中保存和加载一个通用检查点.ipynb
|
ustchope/pytorch_study
| 5 |
<jupyter_start><jupyter_text># Stock Price Prediction with Deep Learning(LSTM)
<jupyter_code>import pandas_datareader as webreader
import math
import numpy as np
import pandas as pd
from datetime import date, timedelta
from pandas.plotting import register_matplotlib_converters
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.preprocessing import MinMaxScaler
from keras.models import Sequential
from keras.layers import LSTM, Dense
from IPython.display import display , Markdown<jupyter_output><empty_output><jupyter_text># Loading the data<jupyter_code># Setting the timeframe for the data extraction
today = date.today()
date_today = today.strftime("%Y-%m-%d")
date_start = '2010-01-01'
# Getting S&P500 quotes
stockname = 'S&P500'
symbol = '^GSPC'
df = webreader.DataReader(
symbol, start=date_start, end=date_today, data_source="yahoo"
)
# Taking a look at the shape of the dataset
print("Number of Rows in the dataset is:",df.shape[0])
print("Number of Columns in the dataset is:",df.shape[1])
display(Markdown('---'))
display(Markdown('##*The starting of the data looks like ...*'))
display(df.head(5))
display(Markdown('---'))
display(Markdown('##*The ending of the data looks like ...*'))
display(df.tail())<jupyter_output>Number of Rows in the dataset is: 2857
Number of Columns in the dataset is: 6
<jupyter_text># Exploring the data<jupyter_code># Plotting the data
register_matplotlib_converters()
years = mdates.YearLocator()
fig, ax1 = plt.subplots(figsize=(16, 6))
ax1.xaxis.set_major_locator(years)
x = df.index
y = df['Close']
ax1.fill_between(x, 0, y, color='#b9e1fa')
ax1.legend([stockname], fontsize=12)
plt.title(stockname + ' from '+ date_start + ' to ' + date_today, fontsize=16)
plt.plot(y, color='#039dfc', label=stockname, linewidth=1.0)
plt.ylabel('S&P500 Points', fontsize=12)
plt.show()<jupyter_output><empty_output><jupyter_text># Preprocessing<jupyter_code># Create a new dataframe with only the Close column and convert to numpy array
data = df.filter(['Close'])
npdataset = data.values
# Get the number of rows to train the model on 80% of the data
training_data_length = math.ceil(len(npdataset) * 0.8)
# Transform features by scaling each feature to a range between 0 and 1
mmscaler = MinMaxScaler(feature_range=(0, 1))
scaled_data = mmscaler.fit_transform(npdataset)
scaled_data
# Plotting the data
register_matplotlib_converters()
years = mdates.YearLocator()
fig, ax1 = plt.subplots(figsize=(16, 6))
ax1.xaxis.set_major_locator(years)
x = df.index
yt = pd.DataFrame(index=df.index,data=scaled_data.flatten(),columns=['scaled'])#df['Close']
y = yt['scaled']
ax1.fill_between(x, 0, y, color='#b9e1fa')
ax1.legend([stockname], fontsize=12)
plt.title(stockname + ' from '+ date_start + ' to ' + date_today, fontsize=16)
plt.plot(y, color='#039dfc', label=stockname, linewidth=1.0)
plt.ylabel('S&P500 Points', fontsize=12)
plt.show()
# Create a scaled training data set
train_data = scaled_data[0:training_data_length, :]
# Split the data into x_train and y_train data sets
x_train = []
y_train = []
trainingdatasize = len(train_data)
for i in range(100, trainingdatasize):
    x_train.append(train_data[i-100: i, 0]) # the previous 100 scaled values (the input window)
    y_train.append(train_data[i, 0]) # the next value, i.e. the target to predict
# Convert the x_train and y_train to numpy arrays
x_train = np.array(x_train)
y_train = np.array(y_train)
# Reshape the data
x_train = np.reshape(x_train, (x_train.shape[0], x_train.shape[1], 1))
#print(x_train.shape)
#print(y_train.shape)
#print(x_train[0][1][0], y_train[1])
#Plotting the training and testing split
display(Markdown('##*The Data is divided into training and testing subparts*'))
labels = ['Training Data', 'Testing Data']
training_data_size_viz = training_data_length/df.shape[0]
sizes = [training_data_size_viz*100,(1-training_data_size_viz)*100]
explode = (0, 0.1) # only "explode" the 2nd slice (i.e. 'Hogs')
fig1, ax1 = plt.subplots(figsize=(8,8))
ax1.pie(sizes, explode=explode, labels=labels, autopct='%1.1f%%',
shadow=True, startangle=90)
ax1.axis('equal') # Equal aspect ratio ensures that pie is drawn as a circle.
plt.show()<jupyter_output><empty_output><jupyter_text># Model Training<jupyter_code># Configure the neural network model
model = Sequential()
# Model with 100 Neurons
# inputshape = 100 Timestamps
model.add(LSTM(100, return_sequences=True, input_shape=(x_train.shape[1], 1)))
model.add(LSTM(100, return_sequences=False))
model.add(Dense(25, activation='relu'))
model.add(Dense(1))
# Compile the model
model.compile(optimizer='adam', loss='mean_squared_error')
# Training the model
history = model.fit(x_train, y_train, batch_size=16, epochs=25,validation_split=0.2)
display(Markdown('##Loss is reducing as the system is learning'))
register_matplotlib_converters()
years = mdates.YearLocator()
fig, ax1 = plt.subplots(figsize=(24, 8))
ax1.xaxis.set_major_locator(years)
x = np.linspace(0,24,25)
y = history.history['loss']
ax1.fill_between(x, 0, y, color='#fab9c1')
ax1.legend([stockname], fontsize=12)
plt.title("Loss is reducing as the system is learning")
plt.plot(y, color='#710815', label="Loss", linewidth=1.0)
plt.ylabel('Loss', fontsize=12)
plt.show()
# Create a new array containing scaled test values
test_data = scaled_data[training_data_length - 100:, :]
# Create the data sets x_test and y_test
x_test = []
y_test = npdataset[training_data_length:, :]
for i in range(100, len(test_data)):
x_test.append(test_data[i-100:i, 0])
# Convert the data to a numpy array
x_test = np.array(x_test)
# Reshape the data, so that we get an array with multiple test datasets
x_test = np.reshape(x_test, (x_test.shape[0], x_test.shape[1], 1))
# Get the predicted values
predictions = model.predict(x_test)
predictions = mmscaler.inverse_transform(predictions)<jupyter_output><empty_output><jupyter_text># Evaluate model performance<jupyter_code># Calculate the mean absolute error (MAE)
mae = mean_absolute_error(predictions, y_test)
display(Markdown('###The Evaluation of the Models Performance'))
display(Markdown('>-The Mean Absolute Error is : ' + str(round(mae, 1))))
# Calculate the root mean squared error (RMSE)
rmse = np.sqrt(np.mean((predictions - y_test) ** 2))  # square the errors before averaging
display(Markdown('>-The Root Mean Squared Error: ' + str(round(rmse, 1))))
display(Markdown('---'))
display(Markdown('<br/>'))
#Plotting performance
fig, ax = plt.subplots(figsize=(16,6))
# Example data
performance = ('MAE', 'RMSE')
y_pos = np.arange(len(performance))
measure = list()
measure.append(mae)
measure.append(rmse)
ax.barh(y_pos, measure, align='center',color='#4b60f2')
ax.set_yticks(y_pos)
ax.set_yticklabels(performance)
ax.invert_yaxis() # labels read top-to-bottom
ax.set_xlabel('Performance')
ax.set_title('Performance of the systems')
plt.show()
# The date from which on the date is displayed
display_start_date = "2018-01-01"
# Add the difference between the valid and predicted prices
train = data[:training_data_length + 1]
valid = data[training_data_length:]
valid.insert(1, "Predictions", predictions, True)
valid.insert(1, "Difference", valid["Predictions"] - valid["Close"], True)
# Zoom in to a closer timeframe
valid = valid[valid.index > display_start_date]
train = train[train.index > display_start_date]
# Visualize the data
fig, ax1 = plt.subplots(figsize=(22, 10), sharex=True)
xt = train.index; yt = train[["Close"]]
xv = valid.index; yv = valid[["Close", "Predictions"]]
plt.title("Predictions vs Ground Truth", fontsize=20)
plt.ylabel(stockname, fontsize=18)
plt.plot(yt, color="#039dfc", linewidth=2.0)
plt.plot(yv["Predictions"], color="#E91D9E", linewidth=1.0)
plt.plot(yv["Close"], color="#833809", linewidth=1.0)
plt.legend(["Train", "Test Predictions", "Ground Truth"], loc="upper left")
# Fill between plotlines
ax1.fill_between(xt, 0, yt["Close"], color="#b9e1fa")
ax1.fill_between(xv, 0, yv["Predictions"], color="#fad2b9")
ax1.fill_between(xv, yv["Close"], yv["Predictions"], color="black")
# Create the bar plot with the differences
x = valid.index
y = valid["Difference"]
plt.bar(x, y, width=5, color="black")
plt.grid()
plt.show()
# Show the valid and predicted prices
dif = valid['Close'] - valid['Predictions']
#valid.insert(2, 'Difference', dif, True)
valid.tail(5)
#Plotting the difference
fig, ax1 = plt.subplots(figsize=(16, 6))
width = 0.6
highPower = list(valid.tail(7)['Close'])
lowPower = list(valid.tail(7)['Predictions'])
indices = np.arange(len(highPower))
plt.bar(indices, highPower, width=width,
color='#0b6096', label='Actual Value')
plt.bar(indices, lowPower,
width=0.9*width, color='#ee1530', alpha=0.6, label='Predicted Value')
plt.xticks(indices+width/2,
list(valid.tail(7).index) )
plt.legend()
plt.show()
<jupyter_output><empty_output><jupyter_text># Predict next day's price<jupyter_code># Get fresh data until today and create a new dataframe with only the price data
price_quote = webreader.DataReader(symbol, data_source='yahoo', start=date_start, end=date_today)
new_df = price_quote.filter(['Close'])
# Get the last 100 day closing price values and scale the data to be values between 0 and 1
last_100_days = new_df[-100:].values
last_100_days_scaled = mmscaler.transform(new_df[-100:].values)
# Create an empty list and Append past 100 days
X_test = []
X_test.append(last_100_days_scaled)
# Convert the X_test data set to a numpy array and reshape the data
X_test = np.array(X_test)
X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], 1))
# Get the predicted scaled price, undo the scaling and output the predictions
pred_price = model.predict(X_test)
pred_price = mmscaler.inverse_transform(pred_price)
date_tomorrow = date.today() + timedelta(days=1)
display(Markdown('## The price for ' + stockname + ' at ' + date_today + ' closing time was: ' + '*' + str(round(df.at[df.index.max(), 'Close'])) + '*'))
display(Markdown('## The predicted ' + stockname + ' price at date ' + str(date_tomorrow) + ' at closing will be: ' + '*' + str(round(pred_price[0, 0], 0)) + '*'))
# Get fresh data until today and create a new dataframe with only the price data
price_quote = webreader.DataReader(symbol, data_source='yahoo', start=date_start, end=date_today)
new_df = price_quote.filter(['Open'])
# Get the last 100 day closing price values and scale the data to be values between 0 and 1
last_100_days = new_df[-100:].values
last_100_days_scaled = mmscaler.transform(new_df[-100:].values)
# Create an empty list and Append past 100 days
X_test = []
X_test.append(last_100_days_scaled)
# Convert the X_test data set to a numpy array and reshape the data
X_test = np.array(X_test)
X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], 1))
# Get the predicted scaled price, undo the scaling and output the predictions
pred_price = model.predict(X_test)
pred_price = mmscaler.inverse_transform(pred_price)
date_tomorrow = date.today() + timedelta(days=1)
display(Markdown('## The price for ' + stockname + ' at ' + date_today + ' opening time was: ' + '*' + str(round(df.at[df.index.max(), 'Open'])) + '*'))
display(Markdown('## The predicted ' + stockname + ' price at date ' + str(date_tomorrow) + ' at opening will be: ' + '*' + str(round(pred_price[0, 0], 0)) + '*'))<jupyter_output><empty_output>
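<jupyter_text>As a small follow-up (not in the original notebook), the already-imported mean_squared_error can be used to cross-check the RMSE reported above:<jupyter_code># Cross-check of the RMSE using sklearn's mean_squared_error (sketch).
rmse_check = np.sqrt(mean_squared_error(y_test, predictions))
print("RMSE (sklearn cross-check):", round(rmse_check, 1))<jupyter_output><empty_output>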
|
no_license
|
/Stock_Market_Prediction (LSTM).ipynb
|
Rittikasur/Stock-Prediction-using-web-scrapping
| 7 |
<jupyter_start><jupyter_text>#Breaking out of two for loops in Python
We want to search a list of lists for a specific value using nested loops. When we find the first occurrence of the value we want to break out of both loops.
The following code doesn't work because we only break out of the inner loop and so continue to find the second occurrence of the value.<jupyter_code>outer = [
[1, 2, 3],
[1, 888, 3],
[1, 2, 888],
[1, 2, 3]
]
for inner in outer:
for value in inner:
if value == 888:
print('found it')
break<jupyter_output>found it
found it
<jupyter_text>The following code does what we want, although the syntax isn't the clearest. The key to understanding it is knowing that an else clause in a for loop runs only when its corresponding for loop has run all the way through without a break statement being invoked. <jupyter_code>outer = [
[1, 2, 3],
[1, 888, 3],
[1, 2, 888],
[1, 2, 3]
]
for inner in outer:
for value in inner:
if value == 888:
print('found it')
break
else:
continue
break<jupyter_output>found it
found it
<jupyter_text>In this simplified example we expect the else clause to run since we don't break out of the for loop.<jupyter_code>for i in range(10):
print(i)
else:
print('Else clause')<jupyter_output>0
1
2
3
4
5
6
7
8
9
Else clause
<jupyter_text>In this example, however, we will break out of the for loop when i == 5 and so we should not expect the else clause to run.<jupyter_code>for i in range(10):
print(i)
if i == 5:
break
else:
print('Else clause')<jupyter_output>0
1
2
3
4
5
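<jupyter_text>An alternative that avoids the for/else idiom entirely is to wrap the nested loops in a function and return as soon as the value is found (a sketch, not from the original notebook):<jupyter_code># Wrapping the search in a function lets `return` exit both loops at once.
def find_value(outer, target):
    for inner in outer:
        for value in inner:
            if value == target:
                print('found it')
                return True
    return False

find_value([[1, 2, 3], [1, 888, 3], [1, 2, 888], [1, 2, 3]], 888)<jupyter_output><empty_output>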
|
no_license
|
/Breaking_out_of_two_for_loops_in_Python.ipynb
|
retrosnob/Azure-Notebooks
| 4 |
<jupyter_start><jupyter_text># TAKEN<jupyter_code>%matplotlib inline
import sys
BIN = '../'
sys.path.append(BIN)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
#from sklearn.model_selection import train_test_split
import torch
import torch.nn as nn
#import torch.nn.parallel
import torch.optim as optim
import torch.utils.data
from torch.autograd import Variable
from torch.utils.data import TensorDataset
from torch.utils.data import DataLoader
# import my_matplotlib_style as ms
# from fastai import data_block, basic_train, basic_data
# from fastai.callbacks import ActivationStats
# import fastai
import matplotlib as mpl
# mpl.rc_file(BIN + 'my_matplotlib_rcparams')
import torch.nn as nn
# import torchvision.transforms as transforms
# device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
device = torch.device("cpu")
print(device)
import os
os.chdir('/home/black/Desktop/tCERN/HEPAutoencoders')
from nn_utils import AE_big, AE_3D_200
from utils import plot_activations
# print(torch.cuda.is_available())
# fastai.torch_core.defaults.device = 'cpu'
# device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
!ls ../self_data
# Load data
train = pd.read_pickle('../self_data/all_jets_train_4D_100_percent.pkl')
test = pd.read_pickle('../self_data/all_jets_test_4D_100_percent.pkl')
n_features = len(train.loc[0])
train.head(10)
# Normalize
train_mean = train.mean()
train_std = train.std()
train = (train - train_mean) / train_std
test = (test - train_mean) / train_std
# train = train.float()
train_x = train
test_x = test
train_y = train_x # y = x since we are building and AE
test_y = test_x
train_ds = TensorDataset(torch.tensor(train_x.values), torch.tensor(train_y.values))
valid_ds = TensorDataset(torch.tensor(test_x.values), torch.tensor(test_y.values))
print(train_x.shape, train_y.shape)
print(test_x.shape, test_y.shape)
batch_size = 1024
train_loader = torch.utils.data.DataLoader(dataset=train_ds, batch_size=batch_size, shuffle=True)
valid_loader = torch.utils.data.DataLoader(dataset=valid_ds, batch_size=batch_size, shuffle=True)
class RMSELoss(torch.nn.Module):
def __init__(self):
super(RMSELoss,self).__init__()
def forward(self,x,y):
criterion = nn.MSELoss()
loss = torch.sqrt(criterion(x, y))
return loss
loss_func = nn.MSELoss()
#loss_func = RMSELoss()
#loss_func = my_loss_func<jupyter_output><empty_output><jupyter_text># Training<jupyter_code>import time
from tqdm import tqdm, trange, tqdm_notebook
from sklearn.metrics import f1_score, accuracy_score
learning_rate = 1e-5
num_epochs = 10
model = AE_3D_200().float().to(device)
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
iterr_count = 0
epoch_count = 0
iter_list = []
train_loss_list = []
train_acc_list = []
valid_loss_list = []
valid_acc_list = []
inspect_size = 20
n_iters = int((len(train) / batch_size) * num_epochs)
n_inspects = int(len(train) / (batch_size*inspect_size))
np.set_printoptions(suppress=True)
torch.set_printoptions(sci_mode=False, precision=4)
# print("Batch Size : ", batch_size)
# print("Learning Rate : ", learning_rate)
# print("Number of Epochs : ", num_epochs)
# print("Iterations/Epoch : ", len(train_loader), len(valid_loader))
# print("Inspects/Epoch : ", n_inspects)
# inspect_size = 80
# num_epochs = 10
monitor_iter = True
monitor_train = True
monitor_train_preds = False
eval_length = 5
monitor_valid = True
monitor_valid_preds = False
save_bset = False
learning_rate = 1e-7
num_epochs = 20
inspect_size = 20
time_start = time.time()
soft = torch.nn.Softmax(dim=1)
train_labels = np.array([])
train_preds = np.array([])
len_train_batches = len(train_loader)
len_valid_batches = len(valid_loader)
for epoch in range(num_epochs):
epoch_count += 1
print("===============================================================================================================")
# print("### Epoch : {:2d}/{:2d} ###".format(epoch_count, num_epochs))
it_train_loader = iter(train_loader)
model.train()
for iteration in tqdm_notebook(range(len_train_batches)):
iterr_count += 1
images, labels = next(it_train_loader)
images = images.type(torch.float).to(device)
labels = labels.type(torch.float).to(device)
optimizer.zero_grad()
preds = model(images)
loss = criterion(preds, labels)
loss.backward()
optimizer.step()
if(monitor_train):
preds = preds.detach().cpu().numpy()
# preds = np.argmax(preds, axis=1)
labels = labels.cpu().numpy()
train_preds = np.append(train_preds, preds)
train_labels = np.append(train_labels, labels)
# ---------------------------------------------------------------------------------------------
####################################### Validation #######################################
if(iteration % inspect_size != 0): continue
if(monitor_iter):
print("[{:2d}/{:2d}] Iteration: {:3d} [{:3.0f}%]".format(epoch_count, num_epochs, iteration, (iteration/len_train_batches) * 100, iteration))
print("------------------------")
iter_list.append(iterr_count)
if(save_bset): print(" - BValid f1: {:0.4f}".format(best_yet_f1))
if(monitor_train):
# train_f1 = f1_score(train_labels, train_preds, average='micro')
# train_acc = accuracy_score(train_labels, train_preds)
train_loss = loss.item()
train_loss_list.append(train_loss)
# train_acc_list.append(train_acc)
print("[Train] Loss: {:0.4f}".format(train_loss))
if(not monitor_train_preds):
train_labels = np.array([])
train_preds = np.array([])
if(monitor_valid):
model.eval()
valid_labels = np.array([])
valid_preds = np.array([])
it_valid_loader = iter(valid_loader)
for iteration in range(len_valid_batches):
with torch.no_grad():
images, labels = next(it_valid_loader)
images = images.type(torch.float).to(device)
labels = labels.to(device)
preds = model(images)
loss = criterion(preds, labels)
preds = preds.detach().cpu().numpy()
# preds = np.argmax(preds, axis=1)
labels = labels.to('cpu').numpy()
valid_labels = np.append(valid_labels, labels)
valid_preds = np.append(valid_preds, preds)
if(eval_length != None and iteration == eval_length):
break
# valid_f1 = f1_score(valid_labels, valid_preds, average='micro')
# valid_acc = accuracy_score(valid_labels, valid_preds)
valid_loss = loss.item()
valid_loss_list.append(valid_loss)
# valid_acc_list.append(valid_acc)
print("[Valid] Loss: {:0.4f}".format(valid_loss))
if(save_bset and valid_f1 > best_yet_f1):
best_yet_f1 = valid_f1
torch.save(model.state_dict(), "best_yet.pt")
print("########### Saved best model ###########")
if(monitor_train_preds):
print("Train Target : ", train_labels[:24])
print("Train Preds : ", train_preds[:24])
train_labels = np.array([])
train_preds = np.array([])
if(monitor_valid_preds):
print("Valid Target : ", valid_labels[:24])
print("Valid Preds : ", valid_preds[:24])
print("==========================================")
time_end = time.time()
print("Time taken : ", time_end-time_start)
'''GENERATING AVERAGE INSPECTION LISTS'''
os.makedirs('Graphs', exist_ok=True)
# Defining roll function
def make_roll(input_list, roll_size=5):
output_list = []
for i in range(len(input_list)):
if i==0:
output_list.append(input_list[0])
elif i<roll_size:
output_list.append(np.mean(input_list[:i+1]))
else:
output_list.append(np.mean(input_list[i-roll_size:i]))
return output_list
# Generating roll lists
train_roll_loss_list = make_roll(train_loss_list, roll_size=5)
train_roll_acc_list = make_roll(train_acc_list, roll_size=5)
valid_roll_acc_list = make_roll(valid_acc_list, roll_size=5)
valid_roll_loss_list = make_roll(valid_loss_list, roll_size=5)
'''PLOTTTING THE LOSS GRAPH'''
plt.figure(figsize=[15,10])
plt.plot(train_loss_list, '-', lw=1, c='salmon', label='Train Loss')
plt.plot(train_roll_loss_list, '-|r', lw=3, label='Train Loss [Avg]')
plt.plot(valid_loss_list, '-', lw=1, c='brown', label='Valid Loss')
plt.plot(valid_roll_loss_list, '-|k', lw=3, label='Valid Loss [Avg]')
plt.title('model-{} : LOSS vs ITERATIONS'.format(-1))
plt.xlabel('Number of Iterations')
plt.ylabel('Loss')
plt.grid(True, linestyle='-.',)
plt.tick_params()
plt.legend()
# plt.savefig("Graphs/model-{}_epochs={}_loss.png".format(params['model_no'], params['num_epochs_trained']), dpi=100)
plt.show()<jupyter_output><empty_output>
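<jupyter_text>A small follow-up sketch (not in the original notebook): once training finishes, the autoencoder's reconstruction error on the validation set can be summarized with the same loaders and device used above.<jupyter_code># Mean reconstruction error over the validation set (sketch).
model.eval()
total_se, total_n = 0.0, 0
with torch.no_grad():
    for images, labels in valid_loader:
        images = images.type(torch.float).to(device)
        preds = model(images)
        total_se += torch.sum((preds - images) ** 2).item()
        total_n += images.numel()
print("Validation reconstruction MSE:", total_se / total_n)<jupyter_output><empty_output>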
|
no_license
|
/model3_custom_pytorch/.ipynb_checkpoints/train3-checkpoint.ipynb
|
ATLAS-Autoencoders-GSoC/gsoc-assignment-adiah80
| 2 |
<jupyter_start><jupyter_text>###### Merging Excel files<jupyter_code>excel_names = []
for excel_name in os.listdir(split_dir):
excel_names.append(excel_name)
excel_names
df_lists = []
for i in range(len(excel_names)):
df_lists.append(pd.read_excel(f'{split_dir}/{excel_names[i]}'))
df_lists
df_concat = pd.concat(df_lists,ignore_index=False)
df_concat.count()
df_concat.to_excel(f'{work_dir}/crazyant_blog_articles_concat.xlsx',index=False)<jupyter_output><empty_output>
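<jupyter_text>A more compact variant of the same merge (a sketch; it assumes the same split_dir and work_dir variables defined earlier in the original notebook, and the output file name is illustrative):<jupyter_code># Equivalent one-pass merge sketch using a list comprehension over the directory listing.
import os
import pandas as pd
df_concat2 = pd.concat(
    [pd.read_excel(os.path.join(split_dir, name)) for name in os.listdir(split_dir)],
    ignore_index=True)
df_concat2.to_excel(f'{work_dir}/crazyant_blog_articles_concat_v2.xlsx', index=False)  # illustrative name<jupyter_output><empty_output>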
|
no_license
|
/Pandas/Pandas拆分合并Excel.ipynb
|
Charswang/Machine-Learning-Datas
| 1 |
<jupyter_start><jupyter_text># Theory Basics: the "Quantum" Behind Quantum Computing
This notebook is for beginners! :D (Those who are interested in quantum computing but have not taken an college level quantum mechanics course.) I'll be real with you, this notebook dives into the **basic concepts** of quantum computing and not much application. Application will build off what you've learned here.
Before we start, here are some prereqs that really are prerequisite to understanding the quantum behind quantum computing.
**Prerequisites:**
* familiar with python
* familiar with matrices (linear algebra is the language of quantum computing)
* familiar with jupyter notebooks
* familiar with binary
* [previous notebook](Entanglement.ipynb)
If you meet these prereqs, great! :D If not, I doubt this notebook will be helpful to you. I **strongly** encourage you to learn those topics before going on. I'd suggest Khan academy for linear algebra—particularly eigenvectors and eigenvalues.
Let's dive in!
### Qiskit
To get started, you'll need to [install qiskit](https://qiskit.org/documentation/install.html#install). Qiskit is IBM's open source quantum computing python library. It has everything you need to make your own algorithms, execute well known algorithms, and run them on simulated or real quantum computers. There are other open source quantum computing libraries out there, but we'll just use qiskit for now. If you want to learn more about qiskit **definitely** go through their "getting started" [tutorials](https://github.com/Qiskit/qiskit-iqx-tutorials/blob/master/qiskit/1_start_here.ipynb).<jupyter_code># This may take a few seconds
import numpy as np
import pandas as pd
from qiskit import *
import matplotlib.pyplot as plt<jupyter_output><empty_output>
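<jupyter_text>To complement the setup above, here is a minimal, hedged example of building and simulating a one-qubit circuit; the qasm_simulator backend and execute() call depend on the installed qiskit/qiskit-aer version, so treat this as a sketch rather than part of the original notebook.<jupyter_code># A one-qubit circuit on the Aer simulator (sketch; assumes qiskit-aer is installed).
qc = QuantumCircuit(1, 1)
qc.h(0)            # put the qubit into superposition
qc.measure(0, 0)   # measure into the classical bit
backend = Aer.get_backend('qasm_simulator')
counts = execute(qc, backend, shots=1024).result().get_counts()
print(counts)      # roughly half '0' and half '1'<jupyter_output><empty_output>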
|
no_license
|
/Information Encoding.ipynb
|
Applied-Quantum-Computing/theory-basics
| 1 |
<jupyter_start><jupyter_text>### Using the OSM Overpass service interface to query the [OpenStreetMap](http://www.openstreetmap.org) open spatial database online.
**_ by [[email protected]](http://my.oschina.net/u/2306127/blog), 2016-04-23. _**
>#### overpy - a Python library for the Overpass API; here the returned result set is saved in JSON format.
* Install: $ pip install overpy
* Documentation: http://python-overpy.readthedocs.org/en/latest/example.html#basic-example
* API: http://wiki.openstreetmap.org/wiki/Overpass_API
This example tool is based on the examples in the documentation above. <jupyter_code>import os, sys, gc
import time
import json
import overpy
from pprint import *<jupyter_output><empty_output><jupyter_text>### Call the Overpass API and retrieve the result data set.
* Because the results come back over the network, requests are easily interrupted, and everything is processed in memory, so this is not suitable for building large query sets.<jupyter_code># Bounding box: lat1, lon1, lat2, lon2
# Returns: result
def get_osm():
query = "[out:json];node(50.745,7.17,50.75,7.18);out;"
osm_op_api = overpy.Overpass()
result = osm_op_api.query(query)
print("Nodes: ",len(result.nodes))
print("Ways: ",len(result.ways))
print("Relations: ",len(result.relations))
    return result<jupyter_output><empty_output><jupyter_text>### Fetch OSM data online.<jupyter_code>result = get_osm()<jupyter_output>Nodes:  2267
Ways: 0
Relations: 0
<jupyter_text>#### Display the nodes' attribute information (only the first 3 nodes are shown).<jupyter_code>nodeset = result.nodes[0:3]
pprint(nodeset)<jupyter_output>[<overpy.Node id=50878400 lat=50.7461788 lon=7.1742257>,
<overpy.Node id=50878401 lat=50.7476027 lon=7.1744795>,
<overpy.Node id=100792806 lat=50.7486483 lon=7.1714704>]
<jupyter_text>#### Iterate over the subset of nodes produced in the previous step.<jupyter_code>for n in nodeset:
print(n.id,n.lat,n.lon)<jupyter_output>50878400 50.7461788 7.1742257
50878401 50.7476027 7.1744795
100792806 50.7486483 7.1714704
<jupyter_text>### Convert the queried data set to JSON format and write it to a JSON file.
(_ This format can be loaded directly by Spark: SQLContext.read.json() _).<jupyter_code>def node2json(node):
jsonNode="{\"id\":\"%s\", \"lat\":\"%s\", \"lon\":\"%s\"}"%(node.id,node.lat,node.lon)
return jsonNode
def node2jsonfile(fname,nodeset):
fnode = open(fname,"w+")
for n in nodeset:
jn = node2json(n) + "\n"
fnode.write(jn)
fnode.close()
print("Nodes:",len(nodeset),", Write to: ",fname)<jupyter_output><empty_output><jupyter_text>#### 执行json文件保存操作。<jupyter_code>node2jsonfile("overpass.osm_node.json",result.nodes) <jupyter_output>Nodes: 2267 , Write to: overpass.osm_node.json
<jupyter_text>#### Take a look at the files.<jupyter_code>!ls -l -h<jupyter_output>total 2.9M
-rw-rw-r-- 1 supermap supermap 26K 5月 4 15:20 osm-discovery.ipynb
-rw-rw-r-- 1 supermap supermap 5.6K 5月 4 15:27 osm-overpass.ipynb
-rw-rw-r-- 1 supermap supermap 15K 4月 23 08:23 osm-tag2json.ipynb
-rw-rw-r-- 1 supermap supermap 10 5月 4 15:17 osm_test.cpg
-rw-rw-r-- 1 supermap supermap 5.8K 5月 4 15:17 osm_test.dbf
-rw-rw-r-- 1 supermap supermap 2.7M 5月 4 15:00 osm_test.osm
-rw-rw-r-- 1 supermap supermap 380 5月 4 15:17 osm_test.shp
-rw-rw-r-- 1 supermap supermap 180 5月 4 15:17 osm_test.shx
-rw-rw-r-- 1 supermap supermap 131K 5月 4 15:27 overpass.osm_node.json
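<jupyter_text>As noted above, the generated JSON can be loaded directly by Spark. A minimal sketch, not part of the original notebook, assuming a SparkSession named `spark` is already available:<jupyter_code># Load the node JSON file into a Spark DataFrame (sketch; assumes an existing SparkSession `spark`).
df_nodes = spark.read.json("overpass.osm_node.json")
df_nodes.show(3)<jupyter_output><empty_output>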
|
non_permissive
|
/geospatial/openstreetmap/osm-overpass-node.ipynb
|
supergis/git_notebook
| 8 |
<jupyter_start><jupyter_text>Import CSV diamonds_train<jupyter_code>import pandas as pd  # needed for pd.read_csv below
diamonds_train = pd.read_csv('Input/diamonds_train.csv')<jupyter_output><empty_output><jupyter_text>## Checking data
It is the same data as before, so let's go ahead and clean it and change the types<jupyter_code>print(diamonds_train.shape)
diamonds_train.head()
diamonds_train.describe()<jupyter_output><empty_output><jupyter_text>Inspect nulls<jupyter_code>diamonds_train.isnull().sum()<jupyter_output><empty_output><jupyter_text>inspect types of data<jupyter_code>diamonds_train.dtypes<jupyter_output><empty_output><jupyter_text>inspect non numeric data<jupyter_code>diamonds_train['cut'].value_counts()<jupyter_output><empty_output><jupyter_text><jupyter_code>diamonds_train['clarity'].value_counts()<jupyter_output><empty_output><jupyter_text><jupyter_code>diamonds_train['color'].value_counts()<jupyter_output><empty_output><jupyter_text><jupyter_code>diamonds_train.drop(['Unnamed: 0'],axis=1,inplace=True)
diamonds_train.head()<jupyter_output><empty_output><jupyter_text>Transform object type to categorical ordinal ( cut - color - clarity)<jupyter_code>def categoricalOrdinal(x):
for key,value in cat_ord.items():
if x==key:
x=value
return x
return x<jupyter_output><empty_output><jupyter_text>1. Transform Color:<jupyter_code>cat_ord = {
"D":1,
"E":2,
"F":3,
"G":4,
"H":5,
"I":6,
"J":7
}
diamonds_train.color = diamonds_train.color.apply(categoricalOrdinal)<jupyter_output><empty_output><jupyter_text>2. Transform Clarity:<jupyter_code>cat_ord = {
"IF":1,
"VVS1":2,
"VVS2":3,
"VS1":4,
"VS2":5,
"SI1":6,
"SI2":7,
"I1":8
}
diamonds_train.clarity = diamonds_train.clarity.apply(categoricalOrdinal)<jupyter_output><empty_output><jupyter_text>3. Transform Cut:<jupyter_code>cat_ord = {
"Ideal":1,
"Premium":2,
"Very Good":3,
"Good":4,
"Fair":5
}
diamonds_train.cut = diamonds_train.cut.apply(categoricalOrdinal)<jupyter_output><empty_output><jupyter_text>Check the head after the modifications:<jupyter_code>diamonds_train.head()<jupyter_output><empty_output><jupyter_text>Drop irrelevant information x, y, z [size is similar to weight so we don't need it]<jupyter_code>diamonds_train.drop(['x','y','z'],axis=1,inplace=True)<jupyter_output><empty_output><jupyter_text>Save the dataset to a new csv file<jupyter_code>diamonds_train.to_csv('Output/Clean/train_clean.csv')<jupyter_output><empty_output>
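<jupyter_text>As a usage note (not part of the original notebook), the same ordinal encoding can be written with pandas' map, giving each column its own dictionary instead of reassigning the global cat_ord before every apply:<jupyter_code># Equivalent sketch using Series.map with explicit per-column mappings (illustrative only;
# in this notebook the columns have already been converted above, so it is not re-run here).
mappings = {
    "color":   {"D": 1, "E": 2, "F": 3, "G": 4, "H": 5, "I": 6, "J": 7},
    "clarity": {"IF": 1, "VVS1": 2, "VVS2": 3, "VS1": 4, "VS2": 5, "SI1": 6, "SI2": 7, "I1": 8},
    "cut":     {"Ideal": 1, "Premium": 2, "Very Good": 3, "Good": 4, "Fair": 5},
}
def encode_ordinal(df, mappings):
    out = df.copy()
    for col, mapping in mappings.items():
        out[col] = out[col].map(mapping)
    return out<jupyter_output><empty_output>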
|
no_license
|
/Clean_Train_Data.ipynb
|
AlbertJlobera/Diamonds-Project
| 15 |
<jupyter_start><jupyter_text># EDA on dataset and preprocessing<jupyter_code># Importing Libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import datetime as dt
sns.set(style="darkgrid")<jupyter_output><empty_output><jupyter_text>### Reading Extracted Data<jupyter_code># df = pd.read_csv("data4.csv")
df = pd.read_csv("flair_dataset.csv")
df.drop(['Unnamed: 0'], axis=1, inplace=True)
df.head()<jupyter_output><empty_output><jupyter_text>### Converting created date in datetime format. This helps us to analyse posts based on dates.<jupyter_code>df['Created_at'] = df['Created_at'].apply(lambda x : dt.datetime.fromtimestamp(x))
df.head()
# Some Information about dataset(df)
df.info()<jupyter_output><class 'pandas.core.frame.DataFrame'>
RangeIndex: 2765 entries, 0 to 2764
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Flair 2765 non-null object
1 Title 2765 non-null object
2 Created_at 2765 non-null datetime64[ns]
3 Score 2765 non-null int64
4 Url_address 2765 non-null object
5 Body 1584 non-null object
6 Author 2765 non-null object
7 num_comments 2765 non-null int64
8 combined_comments 2765 non-null object
dtypes: datetime64[ns](1), int64(2), object(6)
memory usage: 194.5+ KB
<jupyter_text>### Getting the sum of all null values for each column in the dataset, and plotting a heatmap for that.
This helps us to see NA values visually.<jupyter_code>print(df.isnull().sum())
sns.heatmap(df.isnull())<jupyter_output>Flair 0
Title 0
Created_at 0
Score 0
Url_address 0
Body 1181
Author 0
num_comments 0
combined_comments 0
dtype: int64
<jupyter_text>### Filling null values with the string `'nan'`, so that it will be easier in data modeling, and finally plotting the heatmap again.<jupyter_code>df.Body.fillna('nan', inplace=True)
sns.heatmap(df.isnull())<jupyter_output><empty_output><jupyter_text>### Simple Plot to visualize each Flair count.<jupyter_code>print(df.Flair.value_counts())
plt.figure(figsize=(10,4))
plt.title("List of diffrent flairs with count.", fontsize=18)
plt.xlabel("Flairs", fontsize=15)
plt.ylabel("Flairs count", fontsize=15)
df.Flair.value_counts().plot(kind='bar', color='#26a69a')<jupyter_output>Coronavirus 251
Politics 247
Food 242
AskIndia 235
Scheduled 234
Business/Finance 233
Sports 231
Photography 222
Science/Technology 221
Policy/Economy 220
Non-Political 216
AMA 213
Name: Flair, dtype: int64
<jupyter_text>### Comment count for each flair.
By plotting we came to know that AskIndia is the most popular among people; as the name suggests, people ask questions and others give answers.<jupyter_code># Writing flairs in list form.
chosen_flair = [ 'Sports', 'Scheduled', 'Business/Finance', 'Food', 'Politics', 'Policy/Economy', 'Science/Technology', 'Photography', 'Non-Political', 'AskIndia', 'Coronavirus', 'AMA']
print("{:<20} {:<15} ".format('Flair', 'No. of comments'))
total_no_of_comments = {}
for flair in chosen_flair:
total_no_of_comments[flair] = df[df.Flair == flair]['num_comments'].sum()
print("{:<20} {:<15} ".format(flair, df[df.Flair == flair]['num_comments'].sum()))
plt.figure(figsize =(10,8))
plt.barh(np.arange(len(chosen_flair)), total_no_of_comments.values(), color='#26a69a', height=0.9)
plt.yticks(np.arange(len(total_no_of_comments)),list(total_no_of_comments.keys()), fontsize=12)
plt.xlabel("Total Number of comments for Flair", fontsize=15)
plt.ylabel("Flairs", fontsize=15)
plt.show()<jupyter_output>Flair No. of comments
Sports 8708
Scheduled 8701
Business/Finance 6943
Food 12585
Politics 13512
Policy/Economy 8272
Science/Technology 12184
Photography 5188
Non-Political 3942
AskIndia 36691
Coronavirus 36643
AMA 35066
<jupyter_text>### Plotting the Number of Words in the Title. This gives a general overview of title length.
The average number of words for a Title in a post is around 82, with 300 as the max length and 6 as the min number of words.<jupyter_code>length = df.Title.str.len()
_sum = 0
for i in length:
_sum+=i
print("{:<20} {:<15} ".format('Average No of words in Title ', _sum//len(length)))
print("{:<20} {:<15} ".format('Max No of words in Title ', max(length)))
print("{:<20} {:<15} ".format('Min No of words in Title ', min(length)))
length = df.Title.str.len()
plt.figure(figsize =(25,8))
plt.grid(False)
sns.distplot(length, bins=len(length)//10, color="green",kde = False)
plt.title("Number of words in Title", fontsize=14)
plt.xlabel("Title Length", fontsize=12)
plt.ylabel("Number of posts", fontsize=12)
plt.show()
length = df.Body.str.len()
plt.figure(figsize =(20,10))
sns.distplot(length, bins=len(length)//50, color="green",kde = False)
plt.title("Number of words in Body", fontsize=14)
plt.xlabel("Body Length", fontsize=12)
plt.ylabel("Number of posts", fontsize=12)
plt.show()<jupyter_output><empty_output><jupyter_text># Distribution of Submissions with a specific Flair over the Years.
From the plot shown below we can see how submissions with the given flairs are distributed over the years.
- Some flairs, like `Coronavirus`, came into existence this year and did not exist before.
- Some flairs top the chart over others.
From the above observations, if we collect data randomly or based on date then it would be imbalanced. So we took the same number of submissions from each Flair category for fairer classification.<jupyter_code>plt.figure(figsize =(25,12))
plt.plot_date(df.Created_at, df.Flair, color = "#00796b")
plt.xticks(rotation=10)
plt.title('Number of posts for given Flair in year(2010 - 2020)', fontsize=18)
plt.xlabel("Year", fontsize=16)
plt.ylabel("Flair", fontsize=16)
plt.show()<jupyter_output><empty_output><jupyter_text>## Cleaning Data.
Natural text has a lot of different syntax and special characters, which do no good for classification; we can remove them, which in turn optimizes our model. Similarly, we need to remove maximum-frequency n-grams, which are generally known as stopwords. We pass the title, body and comments to the function below.<jupyter_code>import nltk
from nltk.corpus import stopwords
import re
from bs4 import BeautifulSoup
replace_by_space = re.compile('[/(){}\[\]\|@,;]')
replace_symbol = re.compile('[^0-9a-z #+_]')
STOPWORDS = set(stopwords.words('english'))
def column_cleanup(text):
text = BeautifulSoup(text, "lxml").text # HTML decoding
text = text.lower() # lowercase text
text = replace_by_space.sub(' ', text) # replace certain symbols by space in text
text = replace_symbol.sub('', text) # delete symbols from text
text = ' '.join(word for word in text.split() if word not in STOPWORDS) # remove STOPWORDS from text
return text
def _str(text):
return str(text)
df['Title'] = df['Title'].apply(column_cleanup)
df['Body'] = df['Body'].apply(column_cleanup)
df['combined_comments'] = df['combined_comments'].apply(column_cleanup)
df['Title'] = df['Title'].apply(_str)
df['Body'] = df['Body'].apply(_str)
df['combined_comments'] = df['combined_comments'].apply(_str)
df.head(100)<jupyter_output><empty_output><jupyter_text># WordCloud
Word Cloud is a data visualization technique used for representing text data in which the size of each word indicates its frequency or importance.
Here we are generating WordCloud for each Flair.
[Reference](https://www.geeksforgeeks.org/generating-word-cloud-python/)<jupyter_code>from wordcloud import WordCloud, STOPWORDS
idx =0
jdx=0
fig, ax = plt.subplots(3, 4, figsize=(20,10))
for flair in chosen_flair:
a = df['Title'][df.Flair == flair]
comment_words = ''
for val in a:
val = str(val)
tokens = val.split()
for i in range(len(tokens)):
tokens[i] = tokens[i].lower()
comment_words += " ".join(tokens)+" "
wordcloud = WordCloud(width = 4000, height = 3000, max_words=1000).generate(str(comment_words))
fig.suptitle('WordCloud for title of Each Flair')
ax[idx,jdx].set_title(flair, fontsize= 16)
ax[idx,jdx].grid(False)
ax[idx,jdx].axis(False)
ax[idx,jdx].imshow(wordcloud)
jdx+=1
if(jdx>3):
jdx=0
        idx+=1<jupyter_output><empty_output><jupyter_text># Finally we save the cleaned data in a `.csv` file <jupyter_code>df.to_csv("flair_dataset.csv")<jupyter_output><empty_output>
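<jupyter_text>Following up on the imbalance discussion above, a minimal sketch of drawing an equal number of submissions per flair; the per-flair sample size and random_state are illustrative and not from the original notebook:<jupyter_code># Balanced sampling sketch: take the same number of posts from every flair (illustrative values).
n_per_flair = 200
balanced_df = (df.groupby('Flair', group_keys=False)
                 .apply(lambda g: g.sample(n=min(n_per_flair, len(g)), random_state=42)))
print(balanced_df.Flair.value_counts())<jupyter_output><empty_output>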
|
permissive
|
/Notebook/2. EDA.ipynb
|
ahmadkhan242/Reddit-flair-detection
| 12 |
<jupyter_start><jupyter_text>#Basemaps
ArcGIS Online includes several basemaps from Esri that you can use in your maps.<jupyter_code>from arcgis.gis import *
from IPython.display import display
gis = GIS()
basemaps = gis.content.search("tags:esri_basemap owner:esri", "web map")
for basemap in basemaps:
display(basemap)<jupyter_output><empty_output><jupyter_text>The Esri basemaps are included with the arcgis Map widget, and you can assign them dynamically:<jupyter_code>from arcgis.viz import MapView
map = MapView()
map.basemaps
map
import time
for basemap in map.basemaps:
map.basemap = basemap
time.sleep(5)<jupyter_output><empty_output><jupyter_text>Some partners and other users have also shared their basemaps for everyone to use:<jupyter_code>gis = GIS()
with gis:
stamenbasemaps = gis.content.search("tags:partner_basemap stamen", "web map", max_items=3)
for basemap in stamenbasemaps:
display(basemap)<jupyter_output><empty_output><jupyter_text>We can use these other basemaps in the Map widget as well by providing the basemap item to the MapView constructor:<jupyter_code>map2 = MapView(item=stamenbasemaps[1])
map2
map2.zoom = 12
location = gis.tools.geocoder.find_best_match('Mount St Helens')
print(location)
map2.center = location<jupyter_output>(46.191196893000495, -122.19439879099968)
|
no_license
|
/notebooks/06 Basemaps.ipynb
|
cunn1645/arcgis-python-api
| 4 |
<jupyter_start><jupyter_text>### Problem 1
Use MCMC with Gaussian proposal distribution to generate 10000 samples from the distribution:
$$p(x) = \alpha_1 N(\mu_1, \sigma_1) + \alpha_2 N(\mu_2, \sigma_2) + \alpha_3 N(\mu_3, \sigma_3)$$
for selected $\alpha_i, \mu_i, \sigma_i$.
Compare to the regular sampling from the Gaussian Mixture Model.<jupyter_code>import numpy as np
import matplotlib.pyplot as plt
#matplotlib inline
import scipy.stats as stats
alpha = np.array([2, 5, 3])/10
mu = np.array([-3, 5, 8])
sigma = np.array([3, 1, 1.5])
def gen_GMM():
t = np.random.random()
if t<alpha[0]:
idx = 0
elif t<alpha[0]+alpha[1]:
idx = 1
else:
idx = 2
return np.random.normal(loc=mu[idx], scale=sigma[idx])
x = np.zeros(10000)
for i in range(10000):
x[i] = gen_GMM()
plt.hist(x, bins=100, density=True)
plt.show()
def pGMM(x):
p = 0
for i in range(3):
p += alpha[i]*stats.norm.pdf(x, loc=mu[i], scale=sigma[i])
return p
s_proposal=2
x = np.zeros(10000)
x[0] = np.random.random()
for i in range(1, 10000):
x_new = x[i-1]+np.random.normal(loc=0, scale=s_proposal)
A = pGMM(x_new)/pGMM(x[i-1])
if np.random.random()<A:
x[i] = x_new
else:
x[i] = x[i-1]
plt.hist(x, bins=100, density=True)
plt.show()<jupyter_output><empty_output><jupyter_text>### Problem 2
Use Gibbs Sampling to sample uniformly from a set of binary $5 \times 5$ non-degenerate matrices.
Use `numpy.linalg.det`.<jupyter_code>A = np.eye(5)
print(A)
A = A.reshape(25,-1)
for j in range(100):
idx = np.random.randint(0, 25)
B = A.copy()
B[idx] = 1-B[idx]
if np.linalg.det(B.reshape(5,5))!=0:
A = B.copy()
if j%10 == 0:
print(A.reshape(5,5))<jupyter_output>[[1. 0. 0. 0. 1.]
[0. 1. 0. 0. 0.]
[0. 0. 1. 0. 0.]
[0. 0. 0. 1. 0.]
[0. 0. 0. 0. 1.]]
[[1. 1. 0. 1. 1.]
[0. 1. 0. 1. 0.]
[1. 1. 1. 1. 0.]
[0. 0. 0. 1. 0.]
[0. 0. 0. 0. 1.]]
[[1. 0. 0. 1. 1.]
[1. 1. 0. 1. 0.]
[1. 1. 1. 0. 0.]
[0. 0. 0. 1. 0.]
[0. 0. 0. 0. 1.]]
[[1. 0. 1. 1. 1.]
[1. 1. 0. 1. 0.]
[0. 0. 1. 0. 0.]
[0. 0. 1. 1. 0.]
[0. 0. 1. 0. 1.]]
[[1. 0. 0. 0. 1.]
[1. 1. 0. 1. 1.]
[0. 0. 1. 0. 0.]
[0. 0. 1. 1. 1.]
[0. 0. 1. 0. 1.]]
[[1. 0. 0. 0. 0.]
[1. 1. 0. 1. 0.]
[0. 1. 1. 1. 0.]
[0. 0. 0. 1. 1.]
[0. 0. 1. 0. 1.]]
[[1. 0. 0. 0. 0.]
[1. 1. 0. 1. 0.]
[0. 1. 1. 1. 0.]
[1. 0. 0. 0. 1.]
[0. 1. 0. 0. 1.]]
[[1. 0. 0. 0. 1.]
[1. 1. 0. 1. 0.]
[0. 1. 1. 0. 0.]
[1. 0. 1. 1. 1.]
[1. 1. 0. 0. 1.]]
[[1. 0. 1. 0. 0.]
[1. 0. 0. 1. 0.]
[1. 1. 0. 0. 0.]
[1. 0. 1. 1. 1.]
[1. 1. 0. 0. 1.]]
[[1. 0. 1. 0. 0.]
[1. 1. 0. 1. 1.]
[1. 1. 0. 0. 0.]
[1. 1. 1. 1. 1.]
[1. 1. 1. 0. 1.]]
<jupyter_text>### Problem 3
Select parameters for two non-collinear lines
$$y = a^{(i)}x + b^{(i)}$$
Generate 25 points on each line in the range $x \in [-10; 10]$, add random Gaussian noise with zero mean and std $\sigma = 0.5$ to 50% of all the points. Add 10 random points not from the lines. Plot to verify that the task looks reasonable. Combine all generated points into one array and treat it as the input data.
Use RANSAC to find best fit:
1. Select 2 pairs of random points
2. "Draw" two lines through these two pairs (find parameters)
3. Calculate avg MSE by assigning each point to the closest (vertical distance) line
4. Repeat N (sufficiently large) times and keep the best fit
Plot final solution and original lines for comparison.<jupyter_code>a = np.array([1, -1])
b = np.array([0, 0])
p = np.zeros((60, 2))
p[:25,0] = 20*np.random.random(size=25)-10
p[:25,1] = a[0]*p[:25,0]+b[0]+np.random.normal(scale=0.5, size=25)
p[25:50,0] = 20*np.random.random(size=25)-10
p[25:50,1] = a[1]*p[25:50,0]+b[1]+ np.random.normal(scale=0.5, size=25)
p[50:,:] = 20*np.random.random(size=(10,2))-10
x_base = np.linspace(-10,10,100)
plt.plot(p[:50,0], p[:50,1], 'bo')
plt.plot(x_base, a[0]*x_base+b[0], 'k-', x_base, a[1]*x_base+b[1], 'k-')
plt.plot(p[50:,0], p[50:,1], 'ro')
plt.plot()
N = 10
best_mse, best_fit = np.inf, None
for i in range(N):
    # One possible completion of the RANSAC iteration described above (a sketch).
    i1, i2, i3, i4 = np.random.choice(len(p), 4, replace=False)
    if p[i1, 0] == p[i2, 0] or p[i3, 0] == p[i4, 0]:
        continue  # skip degenerate (vertical) candidate lines
    a1 = (p[i2, 1] - p[i1, 1]) / (p[i2, 0] - p[i1, 0]); b1 = p[i1, 1] - a1 * p[i1, 0]
    a2 = (p[i4, 1] - p[i3, 1]) / (p[i4, 0] - p[i3, 0]); b2 = p[i3, 1] - a2 * p[i3, 0]
    # average MSE, with each point assigned to the closest line (vertical distance)
    d1 = (p[:, 1] - (a1 * p[:, 0] + b1)) ** 2
    d2 = (p[:, 1] - (a2 * p[:, 0] + b2)) ** 2
    mse = np.minimum(d1, d2).mean()
    if mse < best_mse:
        best_mse, best_fit = mse, (a1, b1, a2, b2)
# Plot the best fit together with the original lines
a1, b1, a2, b2 = best_fit
plt.plot(x_base, a1 * x_base + b1, 'r-', x_base, a2 * x_base + b2, 'r-')
plt.plot(x_base, a[0] * x_base + b[0], 'k--', x_base, a[1] * x_base + b[1], 'k--')
plt.show()<jupyter_output><empty_output>
|
no_license
|
/practice_notebooks/Practice-Dec1.ipynb
|
maksimbolonkin/cs170-2020
| 3 |
<jupyter_start><jupyter_text>## Scraping Code
### Function Definitions<jupyter_code># Ceiling function
def ceil(n) :
'''
Calculates the ceiling of a number.
Input :
n (float) - A real number for which you want to find the ceiling value.
    Output :
n rounded up.
'''
if int(n) == n :
return n
else :
return int(n)+1
# Returns a number mod 40, but returns 40 instead of 0
# This is because of how the xpath for the tables on the homepage of kenpom.com is determined.
def mod_fix(n,r=40) :
'''
    Calculates modular arithmetic, but replaces 0 with r.
    Input :
        n (int) - The dividend of the modular arithmetic.
        r (int) - The divisor of the modular arithmetic.
    Returns :
        n % r , but r instead of 0
'''
if n%r == 0 :
return r
else :
return n%r
# Path to data
path = "../DATA/"
# Homepage for kenpom.com
home_url = "https://kenpom.com/"
def login(browser) :
'''
Opens kenpom.com in a chrome browser and prompts user for login information.
Once it logs in, the function exits, leaving the browser open.
Inputs :
browser - A web browser handle controlled by python.
'''
# Go to webpage
try :
browser.get(home_url)
# Enter user id
try :
# Get the email element
usr = browser.find_element_by_name('email')
# Empty it of contents
usr.clear()
# Prompt user for email
email = str(input("EMAIL: "))
# Type email into element
usr.send_keys(email)
except :
# Should a problem occur, don't do anything
print("Couldn't find email.")
# Raise an error to stop the function
raise ValueError("Field not found.")
# Enter password
try :
# Get the password element
pwd = browser.find_element_by_name('password')
# Clear it of current contents
pwd.clear()
# Prompt user for password
psswd = str(input("PASSWORD: "))
# Type password into element
pwd.send_keys(psswd)
except :
# Should a problem occur, don't do anything
print("Couldn't find password bar.")
# Raise an error to stop the function
raise ValueError("Field not found.")
# Click login button
try :
# Find the button
login = browser.find_element_by_name('submit')
# Click it
login.click()
except :
# If you couldn't for some reason, do nothing
print("Couldn't find login button.")
# Raise an error to stop the function
raise ValueError("Button not found.")
# Switch to new page
browser.switch_to_window(browser.window_handles[-1])
except :
# If the website doesn't exist for some reason, do nothing
print("Couldn't find home page.")
# Raise an error to stop the function
raise ValueError("Page not found.")
return browser
def get_scouting_report(browser,team_path) :
'''
Get the scouting report for a specific team from kenpom.com
Inputs :
browser - The browser we're controlling
team_path (str) - The directory path to the folder where we're saving the info.
'''
# Get scouting report table
# Turn the page into soup
team_soup = BeautifulSoup(browser.page_source, 'html.parser')
# Find the scouting report
report = team_soup.find('div', attrs={'id':'report'})
# Open an html file
with open(team_path+'Scouting_Report','w') as outfile :
# Save table to file
outfile.write(str(report))
def get_players(browser,current_team,team_path,year) :
'''
Gets data for the players of a specific team in a specific year
Inputs :
browser - The browser we're controlling
current_team (handle) - The handle for the team page
This is the page we want to return to when we're done.
team_path (str) - Path to where we're saving this team's data
year (str) - The current year we're scraping
'''
# The numbers in this range include all players, but some are missing different values.
# This will skip the missing ones and grab the ones that do exist.
for i in range(20) :
try :
# Player link by xpath
player = browser.find_element_by_xpath('//*[@id="player-table"]/tbody/tr[{}]/td[2]/a[1]'.format(i))
# Get the name of the player for the file name
#player_name = player.text
# Open link in new tab
player.send_keys(Keys.COMMAND + Keys.RETURN)
# Wait for tab to load
time.sleep(2)
# Switch view to new tab
browser.find_element_by_tag_name('body').send_keys(Keys.CONTROL + Keys.TAB)
# Switch browser to new tab
browser.switch_to_window(browser.window_handles[-1])
# Get player table and save to player file
player_soup = BeautifulSoup(browser.page_source, 'html.parser')
# Gets player name
player_name = player_soup.find_all('div',attrs={'id' : 'content-header'})[0].find_all('span',attrs={'class' : 'name'})[0].get_text()
# Creates the part of the path for where the player's data will be saved
#player_path = team_path + player_name
# Get schedule data
table = player_soup.find_all('table', attrs={'id':'schedule-table'})
# Check all tables if they're the ones we want
for i in range(len(table)) :
# This element appears above the table pertaining to the desired year's schedule
if browser.find_element_by_xpath('//*[@id="players"]/h3[{}]'.format(i+1)).text == year + " Game Data" :
# Create the file as path_to_team_folder/team_name/player_name #
# The reson for the # at the end is to verify later that the correct table
# was saved.
with open(team_path+player_name+' {}'.format(i),'w') as outfile :
# Write the file
outfile.write(str(table[i]))
# Close the player tab
browser.close()
# Switch to team tab
browser.switch_to_window(current_team)
# If the xpath was for an invalid player (some of them will be) do nothing
except NoSuchElementException :
pass
def get_years(years) :
'''
Goes through kenpom.com year by year for the years provided to scrape data
Inputs :
years (list) - List of years we want to scrape (strings)
'''
# For each year of interest
for year in years :
# Don't want to overload the server
time.sleep(1)
# Get the link for given year
try :
# Find the year element
link = browser.find_element_by_link_text(year)
# Click it
link.click()
# Switch control to the new page
browser.switch_to_window(browser.window_handles[-1])
# Save page, because we'll be returning to it a lot
current_year = browser.current_window_handle
# Create folder for year if not already in existence
if not os.path.exists(path+year+'/') :
os.makedirs(path+year+'/')
# Initialize list of teams
teams = []
# Counter for specific xpath
i = 1
# 68 teams (round of 64 and first 4 gives 4 additional)
while len(teams) < 68 and i < 350:
# Can't hurt to treat the server gently
time.sleep(1)
try :
# Team link by xpath
team = browser.find_element_by_xpath('//*[@id="ratings-table"]/tbody[{}]/tr[{}]/td[2]/a'.format(ceil(i/40),mod_fix(i)))
# Get name of team
team_name = team.text
# Open team link in new tab
team.send_keys(Keys.COMMAND + Keys.RETURN)
# Wait for tab to load
time.sleep(2)
# Switch to new tab
browser.find_element_by_tag_name('body').send_keys(Keys.CONTROL + Keys.TAB)
# Update browser to focus on new tab
browser.switch_to_window(browser.window_handles[-1])
# Save this team page so that it can be returned to later
current_team = browser.current_window_handle
# Set default to True, so that if the xpath exists, we don't need to change anything
in_tourn = True
# Try to find a game that was played in the tournament
try :
tournament_tag = browser.find_element_by_xpath("//*[contains(text(), 'NCAA Tournament')]")
except :
# If we failed, they weren't in the tournament, so we can ignore them
in_tourn = False
# If in the Tournament, get data
if in_tourn :
# Keep track of teams already in tournament
teams.append(team_name)
# Create path to team data
team_path = path+year+'/'+teams[-1]+'/'
# If it doesn't exist already, make a new folder
if not os.path.exists(team_path) :
os.makedirs(team_path)
# Grab the scouting report
get_scouting_report(browser,team_path)
# Grab player data
get_players(browser,current_team,team_path,year)
# Close tab
browser.close()
# Switch back to year tab
browser.switch_to_window(current_year)
# If the specific xpath doesn't exist, don't do anything, go to the next one
except NoSuchElementException :
pass
# Keep track of how many teams we've tried.
# As there are a finite amount of teams, we can stop once we've tried enough.
i = i + 1
except :
# If there was a problem with a given year, let me know
print("Couldn't get data for {}".format(year))<jupyter_output><empty_output><jupyter_text>### Code to Scrape<jupyter_code># Open chrome
browser = webdriver.Chrome('./chromedriver')
# Login
login(browser)
# Years of interest
years = ['2013', '2014','2015','2016','2017']
# Get data
get_years(years)
# Pause
time.sleep(2)
# Close browser
browser.close()<jupyter_output><empty_output><jupyter_text>## Cleaning Code### Clean Bad Data<jupyter_code># Where I'm storing the data
base_path = '../DATA/2013/North Carolina A&T'
# Possible table numbers from scraper
table_numbers = ['0','1','2','3']
# Walk through scraped data
for dir_path , dir_name , file_names in os.walk(base_path) :
for name in file_names :
# Scouting Report formatting
if name == 'Scouting_Report' :
with open(os.path.join(dir_path,name),'r') as infile :
report = infile.read()
# Remove extra information
report = report[332:358] + report[496:]
# Read scouting report
team = pds.read_html(report)[0]
# Rename columns
team.columns = ['Category','Offense','Defense','D-I Avg.']
# Drop columns without data
team = team.dropna(thresh=2)
# Fix certain rows with unapplicable columns
for ind in [1,24,25,27,28,29,30] :
# Remove ranking number in offense
team.set_value(ind,'Offense',(team.loc[ind]['Offense']).split(' ')[0])
# Trim superfluous symbols from offense stats in these rows
if team.loc[ind]['Offense'][0] == '+' :
team.set_value(ind,'Offense',team.loc[ind]['Offense'][1:])
elif team.loc[ind]['Offense'][-1] == r'%' :
team.set_value(ind,'Offense',team.loc[ind]['Offense'][:-1])
elif team.loc[ind]['Offense'][-1] == r'"' :
team.set_value(ind,'Offense',team.loc[ind]['Offense'][:-1])
# Convert the string to a float
team.set_value(ind,'Offense',float(team.loc[ind]['Offense']))
# Remove ranking number in defense
team.set_value(ind,'Defense',(team.loc[ind]['Defense']).split(' ')[0])
# Trim superfluous symbols from defense stats in these rows
if team.loc[ind]['Defense'][-1] == r'%' :
team.set_value(ind,'Defense',team.loc[ind]['Defense'][:-1])
elif team.loc[ind]['Defense'][-1] == r'"' :
team.set_value(ind,'Defense',team.loc[ind]['Defense'][:-1])
# The 'defense' stat should really be the D-I Avg. stat
team.set_value(ind,'D-I Avg.',team.loc[ind]['Defense'])
# The actual defense stat is the same as the offense stat
team.set_value(ind,'Defense',team.loc[ind]['Offense'])
# Fill in any not applicables with an appropriate string
team = team.fillna('N/A')
# Remove ranking from offense and defense values for other columns
for ind in team.index :
if ind not in [1,17,24,25,27,28,29,30] :
team.set_value(ind,'Offense',(team.loc[ind]['Offense']).split(' ')[0])
team.set_value(ind,'Offense',float(team.loc[ind]['Offense']))
team.set_value(ind,'Defense',(team.loc[ind]['Defense']).split(' ')[0])
# 17 is the special case where they're all the same
elif ind == 17 :
team.set_value(ind,'Defense',team.loc[ind]['Offense'])
team.set_value(ind,'D-I Avg.',team.loc[ind]['Offense'])
# Filter to remove extra index columns
team = team.filter(['Offense','Defense','D-I Avg.'])
# Save
team.to_csv(os.path.join(dir_path,name) + '_csv')
elif name[-1] in table_numbers :
with open(os.path.join(dir_path,name),'r') as infile :
table = infile.read()
# Remove empty superfluous <tbody> attribute
table = table[:28] + table[44:]
# Read in html
player = pds.read_html(table)[0]
# Replace unnamed columns and filter out unnecessary data
new_columns = list(player.columns)
new_columns[1] = 'Date'
new_columns[5] = 'OTs'
new_columns[7] = 'Conference'
player.columns = new_columns
# Change "Did not play" status to 0 values
for ind in player.index :
if player.loc[ind]['St'] == "Did not play" :
player.set_value(ind,'MP',0)
player.set_value(ind,'ORtg',0)
player.set_value(ind,'%Ps',0)
player.set_value(ind,'Pts',0)
player.loc[ind,'2Pt'] = '0-0'
player.loc[ind,'3Pt'] = '0-0'
player.loc[ind,'FT'] = '0-0'
player.set_value(ind,'OR',0)
player.set_value(ind,'DR',0)
player.set_value(ind,'A',0)
player.set_value(ind,'TO',0)
player.set_value(ind,'Blk',0)
player.set_value(ind,'Stl',0)
player.set_value(ind,'PF',0)
# Filter based on important stats
player = player.filter(['Date','Opponent','Result','OTs','Site',
'Conference','MP','ORtg','%Ps','Pts','2Pt',
'3Pt','FT','OR','DR','A','TO','Blk','Stl','PF'])
# Drop NaN rows
player = player.dropna(thresh=6)
# Switch NaN to '0OT' in the OTs column
player = player.fillna('0OT')
# Switches - to 0
player = player.replace('-',0)
# Save
player.to_csv(os.path.join(dir_path,name) + '_csv')<jupyter_output><empty_output><jupyter_text>### Add Additional Data<jupyter_code># Load DATA
path = './DATA/'
# Split function (for 2-pt, 3-pt, and ft)
def split(ratio) :
'''
Takes a list of number pairs separated by '-' and splits it into two lists, first and last
Inputs :
ratio (list) - A list of number pairs
Outputs :
made (list) - A list of the first numbers
attempt (list) - A list of the last numbers
'''
values = [value.split('-') for value in ratio]
made , attempt = [int(shots[0]) for shots in values], [int(shots[1]) for shots in values]
return made, attempt
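# Illustrative example (hypothetical values, not taken from the data):
#   split(['3-7', '0-2']) returns ([3, 0], [7, 2])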
# Different types of shots
shot_types = ['2Pt','3Pt','FT']
# Walk through player files
for dir_path , dir_name , file_names in os.walk(path) :
# List of players
players = {}
for name in file_names :
# Only worry about cleaned data
if name[-3:] == 'csv' :
# Don't get the Scouting report
if name[:8] != 'Scouting' :
# Read player data
players[name[:-6]] = pd.read_csv(os.path.join(dir_path,name))
# Empty dict for storing team totals later
team_values = {}
# Get total team values
for player in players.keys() :
for shot_type in shot_types :
# Split the number of made and attempted
made , attempt = split(players[player][shot_type].values)
# Add the number of shots player attempted to total team shots for that game
if shot_type in team_values.keys() :
team_values[shot_type] = [team_values[shot_type][i] + attempt[i] for i in range(len(attempt))]
else :
team_values[shot_type] = attempt
# Create the percentage column
players[player][shot_type+' %'] = [made[i] / attempt[i] if attempt[i] != 0 else 0 for i in range(len(attempt))]
# Add %Att for 2s, 3s, and FT
for player in players.keys() :
for shot_type in shot_types :
# Split number of made and attempted
made , attempt = split(players[player][shot_type].values)
# Get list of percentages
perc_att = [attempt[i] / team_values[shot_type][i] if team_values[shot_type][i] != 0 else 0 for i in range(len(attempt))]
# Create the new column
players[player][shot_type+' %Att'] = perc_att
# Calculate approximate points prevented from blocks and steals
points_prev = [2*(players[player].loc[i]['Blk'] + players[player].loc[i]['Stl'] - players[player].loc[i]['TO']) for i in players[player].index]
# Add data to player
players[player]['Pnts-Prev'] = points_prev
# Get results for point margin
res = players[player]['Result']
# Gets 'W' for win and 'L' for loss
result = [res[i][0] for i in range(len(res))]
# Resets it to exclude the 'W' or 'L'
res = [res[i][3:] for i in range(len(res))]
# Split the scores
score_1 , score_2 = split(res)
# Creates the margin list
margin = [abs(score_1[i]-score_2[i]) if result[i]=='W' else -abs(score_1[i]-score_2[i]) for i in range(len(score_1))]
# Adds the margin column
players[player]['Marg'] = margin
# Refilter to remove extra indices
players[player] = players[player].filter(['Date','Opponent','Result','OTs','Site',
'Conference','MP','ORtg','%Ps','Pts','2Pt',
'3Pt','FT','OR','DR','A','TO','Blk','Stl','PF',
'2Pt %','3Pt %','FT %','2Pt %Att','3Pt %Att',
'FT %Att','Pnts-Prev','Marg'])
# Save new file
players[player].to_csv(os.path.join(dir_path,player)+'_adj')<jupyter_output><empty_output><jupyter_text>### Create Player Avgs File
The following walks through each team for every year and creates a single file that records each player and their season avgs. It also adds a new column for each player signifying whether they are considered a "major contributor" or not (1 for yes, 0 for no).<jupyter_code>path = './DATA/'
# Walk through player files
for dir_path , dir_name , file_names in os.walk(path) :
# List of players
players = {}
for name in file_names :
# Only worry about adjusted data
if name[-3:] == 'adj' :
# Read player data
players[name[:-4]] = pd.read_csv(os.path.join(dir_path,name))
# Get avgs
cols = ['MP','ORtg','%Ps','Pts','OR','DR','A','TO','Blk','Stl','PF',
'2Pt %','3Pt %','FT %','2Pt %Att','3Pt %Att','FT %Att',
'Pnts-Prev','Marg','Maj Cont']
if dir_path[-4:-1] != '201' and dir_path[-5:-1] != 'DATA' :
# Set up empty dataframe
avgs = pd.DataFrame(columns=cols)
# Shrink columns because others don't have the last column (major contributor)
cols = cols[:-1]
for player in players.keys() :
# Get their prominence in the tournament this year
# This is determined by taking their average % of possessions
# used during the tournament
# 0 - 00-12% (Benchwarmer)
# 1 - 12-16% (Limited role)
# 2 - 16-20% (Role player)
# 3 - 20-24% (Significant role)
# 4 - >= 24% (Major contributor)
# Alternatively, you can rate their prominence as :
# 0 - <=10% (Practically did not contribute in tournament)
# 1 - >10% (Did contribute in tournament)
# Both of these are coded, but one should be commented out.
df = players[player]
# The mask is for finished years only, when we can determine how much they contributed
mask = df['Conference'] == 'NCAA-T'
tourn_nums = df['%Ps']*mask
tourn_games = np.count_nonzero(mask.astype(int))
perc_poss = sum(tourn_nums)/tourn_games
# Get numerical columns only
data = players[player].filter(cols)
n = len(data.index)
# Compute avgs (non tournament)
plyr_avgs = [sum(data[col].astype(float)*(1-mask))/n for col in cols]
# Assign prominence according to the comment at the beginning of the for loop
'''
if perc_poss < 12 :
plyr_avgs.append(0)
elif perc_poss < 16 :
plyr_avgs.append(1)
elif perc_poss < 20 :
plyr_avgs.append(2)
elif perc_poss < 24 :
plyr_avgs.append(3)
else :
plyr_avgs.append(4)
'''
if perc_poss >= 10 :
plyr_avgs.append(1)
else :
plyr_avgs.append(0)
avgs.loc[player] = plyr_avgs
avgs.to_csv(dir_path+'/Player avgs')<jupyter_output><empty_output>
| no_license | /Scraping and Cleaning Code.ipynb | dslunde/March_Madness | 5 |
<jupyter_start><jupyter_text>## Two Samples z-test for Proportions
## $z = \frac{\hat{p_1}-\hat{p_2}}{\sqrt{\hat{p} (1-\hat{p}) (\frac{1}{n_1} + \frac{1}{n_2})}} $
where
### $\hat{p_1} = \frac{x_1}{n_1}, \hat{p_2} = \frac{x_2}{n_2} $
### $\hat{p} = \frac{x_1 + x_2}{n_1 + n_2}$
$x_1, x_2$ - number of successes in group 1 and 2
$n_1, n_2$ - number of observations in group 1 and 2<jupyter_code># implementation from scratch
def ztest_proportion_two_samples(x1, n1, x2, n2, one_sided=True, verbose=False):
p1 = x1/n1
p2 = x2/n2
p = (x1+x2)/(n1+n2)
denom = sqrt(p*(1-p)*(1/n1+1/n2))
z = (p1-p2)/denom
p = 1-stats.norm.cdf(abs(z))
p *= 2-one_sided # if not one_sided: p *= 2
if verbose:
print(x1, n1, x2, n2)
print('z-stat = {z}'.format(z=z))
print('p-value = {p}'.format(p=p))
return p
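# Illustrative sanity check with made-up counts (not from this experiment's data):
# 120/1000 vs 150/1000 conversions gives z close to -1.96 and a two-sided p-value just under 0.05
ztest_proportion_two_samples(120, 1000, 150, 1000, one_sided=False)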
def CR_stat_test(data, k, verbose=False):
d = data.iloc[:k, :]
xa = d[d.variant == 0].conversion.sum()
na = d[d.variant == 0].conversion.count()
xb = d[d.variant == 1].conversion.sum()
nb = d[d.variant == 1].conversion.count()
p = ztest_proportion_two_samples(xa, na, xb, nb, one_sided=False, verbose=verbose)
return p, k
CR_stat_test(data, 10000, verbose=True)
pv = [CR_stat_test(data, 10000*i, verbose=False) for i in range(1,150)]
# P-value versus sample size
fig, ax = plt.subplots(figsize=(14,6))
plot([dot[1] for dot in pv], [dot[0] for dot in pv])
plot([0, 2000000], [0.05, 0.05], color='red', linestyle='dashed')
title('P-Value', fontdict={'size':16})
xlabel('split size')
plt.grid(True)
xlim(0, 1400000)
ylim(0,1)
# using statsmodels
from statsmodels.stats.proportion import proportions_ztest
count = np.array([xa, xb])
nobs = np.array([na, nb])
z,p = proportions_ztest(count, nobs, value=0, alternative='two-sided')
print(' z-stat = {z} \n p-value = {p}'.format(z=z,p=p))
data.sort_values(by='ord_value',ascending=False).head()
from scipy.stats import mannwhitneyu
print(mannwhitneyu.__doc__)
#d = data
k = 120000
d = data.iloc[:k, :]
test = d[d.variant == 1].ord_value
control = d[d.variant == 0].ord_value
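# mannwhitneyu returns a (statistic, pvalue) tuple; the [1] below picks out the one-sided p-value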
mannwhitneyu(test, control, alternative='greater')[1]<jupyter_output><empty_output>
| no_license | /content_from_npl_git/materials/seminar_ab_testing/2019-06-06_AB tutorial.ipynb | Vdyuk/newprolab-10.0 | 1 |
<jupyter_start><jupyter_text># Exercises 3## Part 1
- Import Numpy, matplotlib.pyplot and skimage.data
- From the data module import the moon picture
- Plot that image
- Check the image dimensions
- Calculate the image mean, max and min
- Create a mask of pixels with values above half of the max
- Create a cropped version of the image containing the crater on the top left
- Crop the mask as well
- Plot the cropped mask on top of the cropped image using transparency and the colormaps you want.
- Create a list of the pixels and count them
## Part 2
- Find how to generate normally distributed numbers (within numpy.random module)
- Create a matrix of the same size as the cropped images with normally distributed values
- Plot it
- Add the random matrix to the cropped image, and plot it.
- Adjust the noise level by multiplying the random matrix with different factors (1, 10, 100). Plot the result
- From skimage.data load the rocket image
- Check its size
- Crop the rocket image to the same size as the moon image
- Replace the first channel of the cropped rocket image, with the image of the cropped moon
- Plot the result# Solutions 3## Part 1<jupyter_code>import numpy as np
import matplotlib.pyplot as plt
import skimage.data
#load moon image
image = skimage.data.moon()
plt.imshow(image, cmap = 'gray')
plt.show()
#get shape of image
image.shape
#get infos about image values
np.mean(image)
image.mean()
image.max()
image.min()
maxval = image.max()
#create a mask of pixels higher than 0.5*(max value)
mask = image>0.5*maxval
plt.imshow(mask)
plt.show()
#crop an image region
image_crop = image[50:150,50:150]
mask_crop = mask[50:150,50:150]
#superpose image and mask
plt.imshow(image_crop,cmap = 'gray')
plt.imshow(mask_crop,cmap = 'Reds', alpha = 0.5)
plt.show()
#get only positive mask pixels
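# (boolean indexing with the mask returns a 1-D array of the pixel values where the mask is True)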
pixels = image_crop[mask_crop]
len(pixels)<jupyter_output><empty_output><jupyter_text>## Part 2<jupyter_code>#find the function to generate normally distributed samples (Google is your best friend)
np.random.standard_normal?
image_crop.shape[1]
#generate a matrix of the right size
normal_matrix = np.random.standard_normal((image_crop.shape[0], image_crop.shape[1]))
plt.imshow(normal_matrix)
plt.show()
im1 = image_crop +100* normal_matrix
plt.imshow(im1)
plt.show()
im1 = image_crop + 10*normal_matrix
plt.imshow(im1)
plt.show()
rocket = skimage.data.rocket()
rocket.shape
rocket_cropped = rocket[50:150,50:150,:]
rocket_cropped.shape
rocket_cropped[:,:,0] = image_crop
plt.imshow(rocket_cropped)
plt.show()<jupyter_output><empty_output>
| no_license | /Exercises/Exercise3.ipynb | guiwitz/PyImageCourse | 2 |
<jupyter_start><jupyter_text># Imports<jupyter_code>import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as datasets
import torch.utils.data as data
import torchvision.transforms as transforms<jupyter_output><empty_output><jupyter_text># Constants definition<jupyter_code>EPOCHS = 5
IMAGE_SIZE = 28
# Keras's default learning rate is 1e-3
LEARNING_RATE = 1e-3
# Keras's default batch size is 32
BATCH_SIZE = 32<jupyter_output><empty_output><jupyter_text># Providing the data<jupyter_code>transform = transforms.ToTensor()
train_dataset = datasets.FashionMNIST('../data/fashionMNIST', train=True, download=True, transform=transform)
test_dataset = datasets.FashionMNIST('../data/fashionMNIST', train=False, transform=transform)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=BATCH_SIZE)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=BATCH_SIZE)<jupyter_output><empty_output><jupyter_text># Define and Compile the Neural Network<jupyter_code># CrossEntropyLoss in PyTorch assumes unnormalized values, thus Softmax should not be used
# https://stackoverflow.com/a/61438119
model = nn.Sequential(
nn.Flatten(),
nn.Linear(IMAGE_SIZE * IMAGE_SIZE, 128),
nn.ReLU(),
nn.Linear(128, 10)
)
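# Note (illustrative): the model outputs raw logits on purpose, since nn.CrossEntropyLoss
# applies log-softmax internally; to get class probabilities at inference time, apply
# softmax explicitly, e.g. probs = torch.softmax(model(images), dim=1)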
optimizer = optim.Adam(model.parameters(), lr=LEARNING_RATE)
loss_fn = nn.CrossEntropyLoss()<jupyter_output><empty_output><jupyter_text># Training the Neural Network<jupyter_code>for epoch in range(EPOCHS):
print(f'Epoch {epoch+1}/{EPOCHS}')
loss_sum = 0
correct = 0
for images, labels in train_loader:
optimizer.zero_grad()
out = model(images)
loss = loss_fn(out, labels)
loss.backward()
optimizer.step()
_, predicted = torch.max(out.data, 1)
correct += (predicted == labels).sum().item()
loss_sum += loss.item()
print(f'loss: {loss_sum / len(train_loader):e} - acc: {correct / len(train_dataset)}')<jupyter_output>Epoch 1/5
loss: 5.168233e-01 - acc: 0.8178666666666666
Epoch 2/5
loss: 3.814406e-01 - acc: 0.8627333333333334
Epoch 3/5
loss: 3.408272e-01 - acc: 0.8759166666666667
Epoch 4/5
loss: 3.166639e-01 - acc: 0.8841666666666667
Epoch 5/5
loss: 2.988156e-01 - acc: 0.88985
<jupyter_text># Evaluating the result<jupyter_code>correct = 0
with torch.no_grad():
for images, labels in test_loader:
out = model(images)
_, predicted = torch.max(out.data, 1)
correct += (predicted == labels).sum().item()
print(f'Test accuracy: {100 * correct / len(test_dataset)}%')<jupyter_output>Test accuracy: 86.78%
| no_license | /Ep #2 - First steps in computer vision/Example1.ipynb | bigboynaruto/machine-learning-foundations-pytorch | 6 |
<jupyter_start><jupyter_text>## Spectral centroid with Librosa<jupyter_code>FRAME_SIZE = 1024
HOP_LENGTH = 512
sc_debussy = librosa.feature.spectral_centroid(y=debussy,
sr=sr,
n_fft=FRAME_SIZE,
hop_length=HOP_LENGTH)[0]
sc_redhot = librosa.feature.spectral_centroid(y=redhot,
sr=sr,
n_fft=FRAME_SIZE,
hop_length=HOP_LENGTH)[0]
sc_duke = librosa.feature.spectral_centroid(y=duke,
sr=sr,
n_fft=FRAME_SIZE,
hop_length=HOP_LENGTH)[0]
frames = range(len(sc_debussy))
t = librosa.frames_to_time(frames, hop_length=HOP_LENGTH)
plt.figure(figsize=(25, 10))
plt.plot(t, sc_debussy, c='b')
plt.plot(t, sc_redhot, c='r')
plt.plot(t, sc_duke, c='y')
plt.show()<jupyter_output><empty_output><jupyter_text>## Spectral bandwidth with Librosa<jupyter_code>sb_debussy = librosa.feature.spectral_bandwidth(y=debussy,
sr=sr,
n_fft=FRAME_SIZE,
hop_length=HOP_LENGTH)[0]
sb_redhot = librosa.feature.spectral_bandwidth(y=redhot,
sr=sr,
n_fft=FRAME_SIZE,
hop_length=HOP_LENGTH)[0]
sb_duke = librosa.feature.spectral_bandwidth(y=duke,
sr=sr,
n_fft=FRAME_SIZE,
hop_length=HOP_LENGTH)[0]
plt.figure(figsize=(25, 10))
plt.plot(t, sb_debussy, c='b')
plt.plot(t, sb_redhot, c='r')
plt.plot(t, sb_duke, c='y')
plt.show()<jupyter_output><empty_output>
| no_license | /AudioSignalProcessing/008 - Spectral Centroid and Bandwidth.ipynb | nathzi1505/AudioML | 2 |
<jupyter_start><jupyter_text># GRIP - The Sparks Foundation# TASK 6# Prediction using Decision Tree Algorithm### Decision Tree
A decision tree is a flowchart-like structure in which each internal node represents a "test" on an attribute (e.g. whether a coin flip comes up heads or tails), each branch represents the outcome of the test, and each leaf node represents a class label (decision taken after computing all attributes).### Author - Amisha Singh<jupyter_code># Importing libraries in Python
import sklearn.datasets as datasets
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# Loading the iris dataset
iris = pd.read_csv(r'C:\Users\Amisha\Downloads\Iris.csv')
# Forming the Iris dataframe
X = iris.iloc[:, 1:-1].values
y = iris.iloc[:, -1].values
# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
# Defining the decision tree algorithm
from sklearn.tree import DecisionTreeClassifier
classifier = DecisionTreeClassifier(criterion = 'entropy', random_state = 0)
classifier.fit(X_train, y_train)
print('Decision Tree Classifier Created')
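# Optional, illustrative extra: visualize the fitted tree with sklearn's plot_tree
# (requires scikit-learn >= 0.21). The feature names below are assumed from the Iris CSV
# columns; split thresholds appear in standardized units because X_train was scaled.
from sklearn.tree import plot_tree
plt.figure(figsize=(16, 10))
plot_tree(classifier,
          feature_names=['SepalLengthCm', 'SepalWidthCm', 'PetalLengthCm', 'PetalWidthCm'],
          class_names=list(classifier.classes_),
          filled=True)
plt.show()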
# Predicting the Test set results
y_pred = classifier.predict(X_test)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
# Making the Confusion Matrix
from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
# Predicting a new result
print(classifier.predict(sc.transform([[5.2, 3.6, 1.5, 0.4]])))<jupyter_output>['Iris-setosa']
| no_license | /GRIP TASK 6.ipynb | AmishaSingh0210/silver-bassoon | 1 |
<jupyter_start><jupyter_text>**Please open this notebook in Jupyter:
Colab cannot access the webcam through OpenCV.**<jupyter_code>import cv2
import numpy as np
from keras.models import load_model
## We load the model
model=load_model("./model/face_mask_detector_model.h5")
results={0:'NO MASK', 1:'MASK'}
colors={0:(0,0,255), 1:(0,255,0)} #color
rect_size = 4
# Use camera 0
cap = cv2.VideoCapture(0)
# We load the xml file
haarcascade = cv2.CascadeClassifier('./face_detector/haarcascade_frontalface_default.xml')
while True:
# image capture
(rval, image) = cap.read()
#Flip to act as a mirror
image = cv2.flip(image,1,1)
# print our group name
cv2.rectangle(image,
(0,0),
(640,35),
(245, 165, 66),
-1)
cv2.putText(image,
'NeuralBit, PUC',
(10, 25),
cv2.FONT_HERSHEY_SIMPLEX,
0.8,
(255,255,255),
2)
# Resize the image to speed up detection
rerect_size = cv2.resize(image,
(image.shape[1] // rect_size,
image.shape[0] // rect_size))
# detect MultiScale / faces
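# (detectMultiScale returns one (x, y, w, h) box per detected face, in the
# coordinates of the downscaled image, hence the rescaling below)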
faces = haarcascade.detectMultiScale(rerect_size)
# Draw rectangles around each face
for f in faces:
#Scale the face coordinates back up to the original image size
(x, y, w, h) = [v * rect_size for v in f]
#Extract the face region from the full-size image
face_img = image[y:y+h, x:x+w]
rerect_sized = cv2.resize(face_img,(150,150))
normalized = rerect_sized/255.0
reshaped = np.reshape(normalized,(1,150,150,3))
reshaped = np.vstack([reshaped])
# prediction
result=model.predict(reshaped)
accu = round(np.max(result)*100, 2)
label = np.argmax(result,axis=1)[0]
cv2.rectangle(image, (x,y), (x+w,y+h), colors[label],1) #rectangle on face
cv2.rectangle(image, (x,y-30), (x+w,y), colors[label],-1) #label_background
cv2.putText(image, results[label]+' '+str(accu)+'%', (x+5, y-10),cv2.FONT_HERSHEY_SIMPLEX,0.7,(255,255,255),1)
# Show the image
cv2.imshow('NeuralBit | Face Mask Detector', image)
# Press Q on keyboard to exit window
if cv2.waitKey(1) & 0xFF == ord('q'):
break
# Stop video
cap.release()
# Close all started windows
cv2.destroyAllWindows()
<jupyter_output><empty_output>
| no_license | /Part2-Face-Mask_Detector.ipynb | iammeskat/real-time-face-mask-detection | 1 |
<jupyter_start><jupyter_text><jupyter_code>#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
print(tf.__version__)
# EXPECTED OUTPUT
# 2.0.0-beta1 (or later)
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
def plot_series(time, series, format="-", start=0, end=None):
plt.plot(time[start:end], series[start:end], format)
plt.xlabel("Time")
plt.ylabel("Value")
plt.grid(True)
def trend(time, slope=0):
return slope * time
def seasonal_pattern(season_time):
"""Just an arbitrary pattern, you can change it if you wish"""
return np.where(season_time < 0.1,
np.cos(season_time * 7 * np.pi),
1 / np.exp(5 * season_time))
def seasonality(time, period, amplitude=1, phase=0):
"""Repeats the same pattern at each period"""
season_time = ((time + phase) % period) / period
return amplitude * seasonal_pattern(season_time)
def noise(time, noise_level=1, seed=None):
rnd = np.random.RandomState(seed)
return rnd.randn(len(time)) * noise_level
time = np.arange(4 * 365 + 1, dtype="float32")
baseline = 10
series = trend(time, 0.1)
baseline = 10
amplitude = 40
slope = 0.01
noise_level = 2
# Create the series
series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)
# Update with noise
series += noise(time, noise_level, seed=42)
plt.figure(figsize=(10, 6))
plot_series(time, series)
plt.show()
# EXPECTED OUTPUT
# Chart as in the screencast. First should have 5 distinctive 'peaks'<jupyter_output><empty_output><jupyter_text>Now that we have the time series, let's split it so we can start forecasting<jupyter_code>split_time = 1100
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
plt.figure(figsize=(10, 6))
plot_series(time_train, x_train)
plt.show()
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plt.show()
# EXPECTED OUTPUT
# Chart WITH 4 PEAKS between 50 and 65 and 3 troughs between -12 and 0
# Chart with 2 Peaks, first at slightly above 60, last at a little more than that, should also have a single trough at about 0<jupyter_output><empty_output><jupyter_text># Naive Forecast<jupyter_code>naive_forecast = series[split_time - 1:-1]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, naive_forecast)
# Expected output: Chart similar to above, but with forecast overlay<jupyter_output><empty_output><jupyter_text>Let's zoom in on the start of the validation period:<jupyter_code>plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid, start=0, end=150)
plot_series(time_valid, naive_forecast, start=1, end=151)
# EXPECTED - Chart with X-Axis from 1100-1250 and Y Axes with series value and projections. Projections should be time stepped 1 unit 'after' series<jupyter_output><empty_output><jupyter_text>Now let's compute the mean squared error and the mean absolute error between the forecasts and the predictions in the validation period:<jupyter_code>print(keras.metrics.mean_squared_error(x_valid, naive_forecast).numpy())
print(keras.metrics.mean_absolute_error(x_valid, naive_forecast).numpy())
# Expected Output
# 19.578304
# 2.6011968<jupyter_output>19.578308
2.6011975
<jupyter_text>That's our baseline, now let's try a moving average:<jupyter_code>def moving_average_forecast(series, window_size):
"""Forecasts the mean of the last few values.
If window_size=1, then this is equivalent to naive forecast"""
forecast = []
for time in range(len(series) - window_size):
forecast.append(series[time:time + window_size].mean())
return np.array(forecast)
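# Optional, illustrative alternative: the same moving average can be computed much faster
# with cumulative sums instead of a Python loop (equivalent output for a given window_size).
def moving_average_forecast_fast(series, window_size):
  mov = np.cumsum(series)
  mov[window_size:] = mov[window_size:] - mov[:-window_size]
  return mov[window_size - 1:-1] / window_size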
moving_avg = moving_average_forecast(series, 30)[split_time - 30:]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, moving_avg)
# EXPECTED OUTPUT
# Chart with time series from 1100->1450+ on X
# Time series plotted
# Moving average plotted over it
print(keras.metrics.mean_squared_error(x_valid, moving_avg).numpy())
print(keras.metrics.mean_absolute_error(x_valid, moving_avg).numpy())
# EXPECTED OUTPUT
# 65.786224
# 4.3040023
diff_series = (series[365:] - series[:-365])
diff_time = time[365:]
plt.figure(figsize=(10, 6))
plot_series(diff_time, diff_series)
plt.show()
# EXPECTED OUTPUT: Chart with diffs<jupyter_output><empty_output><jupyter_text>Great, the trend and seasonality seem to be gone, so now we can use the moving average:<jupyter_code>diff_moving_avg = moving_average_forecast(diff_series, 50)[split_time - 365 - 50:]
plt.figure(figsize=(10, 6))
plot_series(time_valid, diff_series[split_time - 365:])
plot_series(time_valid, diff_moving_avg)
plt.show()
# Expected output. Diff chart from 1100->1450 +
# Overlaid with moving average<jupyter_output><empty_output><jupyter_text>Now let's bring back the trend and seasonality by adding the past values from t – 365:<jupyter_code>diff_moving_avg_plus_past = series[split_time - 365:-365] + diff_moving_avg
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, diff_moving_avg_plus_past)
plt.show()
# Expected output: Chart from 1100->1450+ on X. Same chart as earlier for time series, but projection overlaid looks close in value to it
print(keras.metrics.mean_squared_error(x_valid, diff_moving_avg_plus_past).numpy())
print(keras.metrics.mean_absolute_error(x_valid, diff_moving_avg_plus_past).numpy())
# EXPECTED OUTPUT
# 8.498155
# 2.327179<jupyter_output>8.498155
2.3271792
<jupyter_text>Better than the naive forecast, good. However, the forecasts look a bit too random, because we're just adding past values, which were noisy. Let's use a moving average on past values to remove some of the noise:<jupyter_code>diff_moving_avg_plus_smooth_past = moving_average_forecast(series[split_time - 370:-360], 10) + diff_moving_avg
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, diff_moving_avg_plus_smooth_past)
plt.show()
# EXPECTED OUTPUT:
# Similar chart to above, but the overlaid projections are much smoother
print(keras.metrics.mean_squared_error(x_valid, diff_moving_avg_plus_smooth_past).numpy())
print(keras.metrics.mean_absolute_error(x_valid, diff_moving_avg_plus_smooth_past).numpy())
# EXPECTED OUTPUT
# 12.527958
# 2.2034433<jupyter_output>12.527958
2.2034435
| permissive | /Sequences, Time Series and Prediction/Week_1_Exercise_Answer.ipynb | thliang01/Learn-Machine-Learning | 9 |
<jupyter_start><jupyter_text>
# **SpaceX Falcon 9 first stage Landing Prediction**
# Lab 1: Collecting the data
Estimated time needed: **45** minutes
In this capstone, we will predict if the Falcon 9 first stage will land successfully. SpaceX advertises Falcon 9 rocket launches on its website with a cost of 62 million dollars; other providers cost upward of 165 million dollars each, and much of the savings is because SpaceX can reuse the first stage. Therefore, if we can determine whether the first stage will land, we can determine the cost of a launch. This information can be used if an alternate company wants to bid against SpaceX for a rocket launch. In this lab, you will collect data from an API and make sure it is in the correct format. The following is an example of a successful launch.

Several examples of an unsuccessful landing are shown here:

Most unsuccessful landings are planned. Space X performs a controlled landing in the oceans.
## Objectives
In this lab, you will make a get request to the SpaceX API. You will also do some basic data wrangling and formating.
* Request to the SpaceX API
* Clean the requested data
***
## Import Libraries and Define Auxiliary Functions
We will import the following libraries into the lab
<jupyter_code># Requests allows us to make HTTP requests which we will use to get data from an API
import requests
# Pandas is a software library written for the Python programming language for data manipulation and analysis.
import pandas as pd
# NumPy is a library for the Python programming language, adding support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays
import numpy as np
# Datetime is a library that allows us to represent dates
import datetime
# Setting this option will print all collumns of a dataframe
pd.set_option('display.max_columns', None)
# Setting this option will print all of the data in a feature
pd.set_option('display.max_colwidth', None)<jupyter_output><empty_output><jupyter_text>Below we will define a series of helper functions that will help us use the API to extract information using identification numbers in the launch data.
From the rocket column we would like to learn the booster name.
<jupyter_code># Takes the dataset and uses the rocket column to call the API and append the data to the list
def getBoosterVersion(data):
for x in data['rocket']:
response = requests.get("https://api.spacexdata.com/v4/rockets/"+str(x)).json()
BoosterVersion.append(response['name'])<jupyter_output><empty_output><jupyter_text>From the launchpad we would like to know the name of the launch site being used, the logitude, and the latitude.
<jupyter_code># Takes the dataset and uses the launchpad column to call the API and append the data to the list
def getLaunchSite(data):
for x in data['launchpad']:
response = requests.get("https://api.spacexdata.com/v4/launchpads/"+str(x)).json()
Longitude.append(response['longitude'])
Latitude.append(response['latitude'])
LaunchSite.append(response['name'])<jupyter_output><empty_output><jupyter_text>From the payload we would like to learn the mass of the payload and the orbit that it is going to.
<jupyter_code># Takes the dataset and uses the payloads column to call the API and append the data to the lists
def getPayloadData(data):
for load in data['payloads']:
response = requests.get("https://api.spacexdata.com/v4/payloads/"+load).json()
PayloadMass.append(response['mass_kg'])
Orbit.append(response['orbit'])<jupyter_output><empty_output><jupyter_text>From cores we would like to learn the outcome of the landing, the type of the landing, number of flights with that core, whether gridfins were used, wheter the core is reused, wheter legs were used, the landing pad used, the block of the core which is a number used to seperate version of cores, the number of times this specific core has been reused, and the serial of the core.
<jupyter_code># Takes the dataset and uses the cores column to call the API and append the data to the lists
def getCoreData(data):
for core in data['cores']:
if core['core'] != None:
response = requests.get("https://api.spacexdata.com/v4/cores/"+core['core']).json()
Block.append(response['block'])
ReusedCount.append(response['reuse_count'])
Serial.append(response['serial'])
else:
Block.append(None)
ReusedCount.append(None)
Serial.append(None)
Outcome.append(str(core['landing_success'])+' '+str(core['landing_type']))
Flights.append(core['flight'])
GridFins.append(core['gridfins'])
Reused.append(core['reused'])
Legs.append(core['legs'])
LandingPad.append(core['landpad'])<jupyter_output><empty_output><jupyter_text>Now let's start requesting rocket launch data from SpaceX API with the following URL:
<jupyter_code>spacex_url="https://api.spacexdata.com/v4/launches/past"
response = requests.get(spacex_url)<jupyter_output><empty_output><jupyter_text>Check the content of the response
<jupyter_code>print(response.content)<jupyter_output>b'[{"fairings":{"reused":false,"recovery_attempt":false,"recovered":false,"ships":[]},"links":{"patch":{"small":"https://images2.imgbox.com/3c/0e/T8iJcSN3_o.png","large":"https://images2.imgbox.com/40/e3/GypSkayF_o.png"},"reddit":{"campaign":null,"launch":null,"media":null,"recovery":null},"flickr":{"small":[],"original":[]},"presskit":null,"webcast":"https://www.youtube.com/watch?v=0a_00nJ_Y88","youtube_id":"0a_00nJ_Y88","article":"https://www.space.com/2196-spacex-inaugural-falcon-1-rocket-lost-launch.html","wikipedia":"https://en.wikipedia.org/wiki/DemoSat"},"static_fire_date_utc":"2006-03-17T00:00:00.000Z","static_fire_date_unix":1142553600,"net":false,"window":0,"rocket":"5e9d0d95eda69955f709d1eb","success":false,"failures":[{"time":33,"altitude":null,"reason":"merlin engine failure"}],"details":"Engine failure at 33 seconds and loss of vehicle","crew":[],"ships":[],"capsules":[],"payloads":["5eb0e4b5b6c3bb0006eeb1e1"],"launchpad":"5e9e4502f5090995de566f86","flight_number":1,"name[...]<jupyter_text>You should see the response contains massive information about SpaceX launches. Next, let's try to discover some more relevant information for this project.
### Task 1: Request and parse the SpaceX launch data using the GET request
To make the requested JSON results more consistent, we will use the following static response object for this project:
<jupyter_code>static_json_url='https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DS0321EN-SkillsNetwork/datasets/API_call_spacex_api.json'<jupyter_output><empty_output><jupyter_text>We should see that the request was successfull with the 200 status response code
<jupyter_code>response.status_code<jupyter_output><empty_output><jupyter_text>Now we decode the response content as a Json using .json() and turn it into a Pandas dataframe using .json_normalize()
<jupyter_code># Use json_normalize meethod to convert the json result into a dataframe
data = pd.json_normalize(response.json())<jupyter_output><empty_output><jupyter_text>Using the dataframe data print the first 5 rows
<jupyter_code># Get the head of the dataframe
data.head()<jupyter_output><empty_output><jupyter_text>You will notice that a lot of the data are IDs. For example the rocket column has no information about the rocket just an identification number.
We will now use the API again to get information about the launches using the IDs given for each launch. Specifically we will be using columns rocket, payloads, launchpad, and cores.
<jupyter_code># Lets take a subset of our dataframe keeping only the features we want and the flight number, and date_utc.
data = data[['rocket', 'payloads', 'launchpad', 'cores', 'flight_number', 'date_utc']]
# We will remove rows with multiple cores because those are falcon rockets with 2 extra rocket boosters and rows that have multiple payloads in a single rocket.
data = data[data['cores'].map(len)==1]
data = data[data['payloads'].map(len)==1]
# Since payloads and cores are lists of size 1 we will also extract the single value in the list and replace the feature.
data['cores'] = data['cores'].map(lambda x : x[0])
data['payloads'] = data['payloads'].map(lambda x : x[0])
# We also want to convert the date_utc to a datetime datatype and then extracting the date leaving the time
data['date'] = pd.to_datetime(data['date_utc']).dt.date
# Using the date we will restrict the dates of the launches
data = data[data['date'] <= datetime.date(2020, 11, 13)]<jupyter_output><empty_output><jupyter_text>* From the rocket we would like to learn the booster name
* From the payload we would like to learn the mass of the payload and the orbit that it is going to
* From the launchpad we would like to know the name of the launch site being used, the longitude, and the latitude.
* From cores we would like to learn the outcome of the landing, the type of the landing, number of flights with that core, whether gridfins were used, whether the core is reused, whether legs were used, the landing pad used, the block of the core which is a number used to separate versions of cores, the number of times this specific core has been reused, and the serial of the core.
The data from these requests will be stored in lists and will be used to create a new dataframe.
<jupyter_code>#Global variables
BoosterVersion = []
PayloadMass = []
Orbit = []
LaunchSite = []
Outcome = []
Flights = []
GridFins = []
Reused = []
Legs = []
LandingPad = []
Block = []
ReusedCount = []
Serial = []
Longitude = []
Latitude = []<jupyter_output><empty_output><jupyter_text>These functions will apply the outputs globally to the above variables. Let's take a look at the BoosterVersion variable. Before we apply getBoosterVersion the list is empty:
<jupyter_code>BoosterVersion<jupyter_output><empty_output><jupyter_text>Now, let's apply the getBoosterVersion function to get the booster version
<jupyter_code># Call getBoosterVersion
getBoosterVersion(data)<jupyter_output><empty_output><jupyter_text>The list has now been updated
<jupyter_code>BoosterVersion[0:5]<jupyter_output><empty_output><jupyter_text>we can apply the rest of the functions here:
<jupyter_code># Call getLaunchSite
getLaunchSite(data)
# Call getPayloadData
getPayloadData(data)
# Call getCoreData
getCoreData(data)<jupyter_output><empty_output><jupyter_text>Finally, let's construct our dataset using the data we have obtained. We will combine the columns into a dictionary.
<jupyter_code>launch_dict = {'FlightNumber': list(data['flight_number']),
'Date': list(data['date']),
'BoosterVersion':BoosterVersion,
'PayloadMass':PayloadMass,
'Orbit':Orbit,
'LaunchSite':LaunchSite,
'Outcome':Outcome,
'Flights':Flights,
'GridFins':GridFins,
'Reused':Reused,
'Legs':Legs,
'LandingPad':LandingPad,
'Block':Block,
'ReusedCount':ReusedCount,
'Serial':Serial,
'Longitude': Longitude,
'Latitude': Latitude}
<jupyter_output><empty_output><jupyter_text>Then, we need to create a Pandas data frame from the dictionary launch_dict.
<jupyter_code># Create a dataframe from launch_dict
data2 = pd.DataFrame(launch_dict)<jupyter_output><empty_output><jupyter_text>Show the summary of the dataframe
<jupyter_code># Show the head of the dataframe
data2.head(5)<jupyter_output><empty_output><jupyter_text>### Task 2: Filter the dataframe to only include `Falcon 9` launches
Finally we will remove the Falcon 1 launches keeping only the Falcon 9 launches. Filter the data dataframe using the BoosterVersion column to only keep the Falcon 9 launches. Save the filtered data to a new dataframe called data_falcon9.
<jupyter_code># Hint data['BoosterVersion']!='Falcon 1'
data_falcon9 = data2[data2['BoosterVersion']=='Falcon 9']
data_falcon9<jupyter_output><empty_output><jupyter_text>Now that we have removed some values we should reset the FlightNumber column
<jupyter_code>data_falcon9.loc[:,'FlightNumber'] = list(range(1, data_falcon9.shape[0]+1))
data_falcon9<jupyter_output>/opt/conda/envs/Python-3.8-main/lib/python3.8/site-packages/pandas/core/indexing.py:1676: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
self._setitem_single_column(ilocs[0], value, pi)
<jupyter_text>## Data Wrangling
We can see below that some of the rows are missing values in our dataset.
<jupyter_code>data_falcon9.isnull().sum()<jupyter_output><empty_output><jupyter_text>Before we can continue we must deal with these missing values. The LandingPad column will retain None values to represent when landing pads were not used.
### Task 3: Dealing with Missing Values
Calculate below the mean for the PayloadMass using the .mean(). Then use the mean and the .replace() function to replace `np.nan` values in the data with the mean you calculated.
<jupyter_code># Calculate the mean value of PayloadMass column
pm_mean = data_falcon9['PayloadMass'].mean()
# Replace the np.nan values with its mean value
temp = data_falcon9['PayloadMass'].replace(np.nan, pm_mean)
data_falcon9['PayloadMass'] = temp
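# Note (illustrative): the SettingWithCopyWarning shown below appears because data_falcon9
# is a filtered slice of data2; creating it with .copy() (e.g. data2[...].copy()) is a
# common way to avoid the warning.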
data_falcon9<jupyter_output><ipython-input-49-915aac30ba1b>:6: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
data_falcon9['PayloadMass'] = temp
<jupyter_text>You should see the number of missing values of the PayLoadMass change to zero.
<jupyter_code>data_falcon9.isnull().sum()<jupyter_output><empty_output><jupyter_text>Now we should have no missing values in our dataset except for in LandingPad.
We can now export it to a CSV for the next section,but to make the answers consistent, in the next lab we will provide data in a pre-selected date range.
<jupyter_code>data_falcon9.to_csv('dataset_part\_1.csv', index=False)
<jupyter_output><empty_output>
| no_license | /Data Collection API.ipynb | fauzi417/testrepo | 26 |
<jupyter_start><jupyter_text>## Train<jupyter_code>data = [('UCLAsource', Transformer(matrix_eig))]
weighters = [('binarW', Transformer(orig))]
normalizers = [('origN', Transformer(orig))]
featurizers = [('Orig', Transformer(orig, collect=['X_orig'])),
('1', Transformer(split_1, collect=['X_low'])),
('20', Transformer(split_20, collect=['X_low'])),
('40', Transformer(split_40, collect=['X_low'])),
('60', Transformer(split_60, collect=['X_low'])),
('80', Transformer(split_80, collect=['X_low'])),
('100', Transformer(split_100, collect=['X_low'])),
('120', Transformer(split_120, collect=['X_low'])),
('140', Transformer(split_140, collect=['X_low'])),
('160', Transformer(split_160, collect=['X_low'])),
('180', Transformer(split_180, collect=['X_low'])),
('200', Transformer(split_200, collect=['X_low'])),
('220', Transformer(split_220, collect=['X_low'])),
('240', Transformer(split_240, collect=['X_low'])),
('260', Transformer(split_260, collect=['X_low']))]
selectors = [('var_threshold', VarianceThreshold())]
scalers = [('minmax', MinMaxScaler())]
classifiers = [('LR', LogisticRegression())]
steps = [('Data', data),
('Weighters', weighters),
('Normalizers', normalizers),
('Featurizers', featurizers),
('Selectors', selectors),
('Scalers', scalers),
('Classifiers', classifiers)]
param_grid = dict(
LR=dict(
C=[0.1*i for i in range(1, 11)],
max_iter=[50, 100, 500],
penalty=['l1']
)
)
banned_combos = []
steps = [('Data', data),
('Weighters', weighters),
('Normalizers', normalizers),
('Featurizers', featurizers),
('Selectors', selectors),
('Scalers', scalers),
('Classifiers', classifiers)]
pipe = Pipeliner(steps, param_grid=param_grid, banned_combos=banned_combos)
pipe.plan_table
pipe.get_results('../Data/dti/', caching_steps=['Data', 'Weighters', 'Normalizers', 'Featurizers'],
scoring=['roc_auc'], collect_n = 100, results_file = 'LR/test_10.csv')
print_boxplot('LR/test_10.csv', "Test on computer")
print_boxplot('LR/test_8.csv', "Test on computer")<jupyter_output><empty_output>
| no_license | /Subsection/LR_comp.ipynb | Tismoney/PRNI2016 | 1 |
<jupyter_start><jupyter_text># Creating a Sentiment Analysis Web App
## Using PyTorch and SageMaker
_Deep Learning Nanodegree Program | Deployment_
---
Now that we have a basic understanding of how SageMaker works we will try to use it to construct a complete project from end to end. Our goal will be to have a simple web page which a user can use to enter a movie review. The web page will then send the review off to our deployed model which will predict the sentiment of the entered review.
## Instructions
Some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a `# TODO: ...` comment. Please be sure to read the instructions carefully!
In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.
> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can typically be edited by clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted.
## General Outline
Recall the general outline for SageMaker projects using a notebook instance.
1. Download or otherwise retrieve the data.
2. Process / Prepare the data.
3. Upload the processed data to S3.
4. Train a chosen model.
5. Test the trained model (typically using a batch transform job).
6. Deploy the trained model.
7. Use the deployed model.
For this project, you will be following the steps in the general outline with some modifications.
First, you will not be testing the model in its own step. You will still be testing the model, however, you will do it by deploying your model and then using the deployed model by sending the test data to it. One of the reasons for doing this is so that you can make sure that your deployed model is working correctly before moving forward.
In addition, you will deploy and use your trained model a second time. In the second iteration you will customize the way that your trained model is deployed by including some of your own code. In addition, your newly deployed model will be used in the sentiment analysis web app.## Step 1: Downloading the data
As in the XGBoost in SageMaker notebook, we will be using the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/)
> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.<jupyter_code>%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data<jupyter_output>mkdir: cannot create directory ‘../data’: File exists
--2019-12-08 11:54:46-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10
Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 84125825 (80M) [application/x-gzip]
Saving to: ‘../data/aclImdb_v1.tar.gz’
../data/aclImdb_v1. 100%[===================>] 80.23M 23.3MB/s in 4.2s
2019-12-08 11:54:51 (18.9 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825]
<jupyter_text>## Step 2: Preparing and Processing the data
Also, as in the XGBoost notebook, we will be doing some initial data processing. The first few steps are the same as in the XGBoost example. To begin with, we will read in each of the reviews and combine them into a single input structure. Then, we will split the dataset into a training set and a testing set.<jupyter_code>import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))<jupyter_output>IMDB reviews: train = 12500 pos / 12500 neg, test = 12500 pos / 12500 neg
<jupyter_text>Now that we've read the raw training and testing data from the downloaded dataset, we will combine the positive and negative reviews and shuffle the resulting records.<jupyter_code>from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
# Return a unified training data, test data, training labels, test labets
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))<jupyter_output>IMDb reviews (combined): train = 25000, test = 25000
<jupyter_text>Now that we have our training and testing sets unified and prepared, we should do a quick check and see an example of the data our model will be trained on. This is generally a good idea as it allows you to see how each of the further processing steps affects the reviews and it also ensures that the data has been loaded correctly.<jupyter_code>print(train_X[100])
print(train_y[100])<jupyter_output>While not as bad as it has been made to be (I have seen MUCH worse), this is still a very lame movie. Basically a rehash of Siegel's "Coogan's Bluff", with the main difference being that Clint Eastwood's hat has more charisma than the whole of Joe Don Baker, an unappealing actor if there was one.<br /><br />However, Venantino Venantini is great (and great fun) as the bad guy, sort of a budget Vittorio Gassman. He is the main reason to sit through this steampile, as the rest of the cast deliver mostly terrible acting, specially the girl. Poor old Rossano Brazzi, hard to believe he was once a romantic lead (watch "Mondo Cane" to see him running away from women). Looking here like a second-tier Ben Gazzara, he's given next to nothing to do. It's all Joe Don's show, unfortunately. And all of it scored to generic 80's "action movie" music that couldn't be more boring.<br /><br />Greydon Clark can make good B-Movies ("Without Warning"), but here he trips, falls, breaks his nose and loses thr[...]<jupyter_text>The first step in processing the reviews is to make sure that any html tags that appear should be removed. In addition we wish to tokenize our input, that way words such as *entertained* and *entertaining* are considered the same with regard to sentiment analysis.<jupyter_code>import nltk
from nltk.corpus import stopwords
from nltk.stem.porter import *
import re
from bs4 import BeautifulSoup
def review_to_words(review):
nltk.download("stopwords", quiet=True)
stemmer = PorterStemmer()
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
words = [PorterStemmer().stem(w) for w in words] # stem
return words<jupyter_output><empty_output><jupyter_text>The `review_to_words` method defined above uses `BeautifulSoup` to remove any html tags that appear and uses the `nltk` package to tokenize the reviews. As a check to ensure we know how everything is working, try applying `review_to_words` to one of the reviews in the training set.<jupyter_code># TODO: Apply review_to_words to a review (train_X[100] or any other review)
print(review_to_words(train_X[100]))<jupyter_output>['bad', 'made', 'seen', 'much', 'wors', 'still', 'lame', 'movi', 'basic', 'rehash', 'siegel', 'coogan', 'bluff', 'main', 'differ', 'clint', 'eastwood', 'hat', 'charisma', 'whole', 'joe', 'baker', 'unapp', 'actor', 'one', 'howev', 'venantino', 'venantini', 'great', 'great', 'fun', 'bad', 'guy', 'sort', 'budget', 'vittorio', 'gassman', 'main', 'reason', 'sit', 'steampil', 'rest', 'cast', 'deliv', 'mostli', 'terribl', 'act', 'special', 'girl', 'poor', 'old', 'rossano', 'brazzi', 'hard', 'believ', 'romant', 'lead', 'watch', 'mondo', 'cane', 'see', 'run', 'away', 'women', 'look', 'like', 'second', 'tier', 'ben', 'gazzara', 'given', 'next', 'noth', 'joe', 'show', 'unfortun', 'score', 'gener', '80', 'action', 'movi', 'music', 'bore', 'greydon', 'clark', 'make', 'good', 'b', 'movi', 'without', 'warn', 'trip', 'fall', 'break', 'nose', 'lose', 'three', 'teeth', 'well', 'least', 'malta', 'locat', 'nice', 'venantini', 'tri', 'save', 'day', '3', '10']
<jupyter_text>**Question:** Above we mentioned that the `review_to_words` method removes html formatting and allows us to tokenize the words found in a review, for example, converting *entertained* and *entertaining* into *entertain* so that they are treated as though they are the same word. What else, if anything, does this method do to the input?**Answer:** It also converts the text to lower case, strips out punctuation and other non-alphanumeric characters, splits the string into individual words, and removes English stopwords.The method below applies the `review_to_words` method to each of the reviews in the training and testing datasets. In addition it caches the results. This is because performing this processing step can take a long time. This way if you are unable to complete the notebook in the current session, you can come back without needing to process the data a second time.<jupyter_code>import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)<jupyter_output>Read preprocessed data from cache file: preprocessed_data.pkl
<jupyter_text>## Transform the data
In the XGBoost notebook we transformed the data from its word representation to a bag-of-words feature representation. For the model we are going to construct in this notebook we will construct a feature representation which is very similar. To start, we will represent each word as an integer. Of course, some of the words that appear in the reviews occur very infrequently and so likely don't contain much information for the purposes of sentiment analysis. The way we will deal with this problem is that we will fix the size of our working vocabulary and we will only include the words that appear most frequently. We will then combine all of the infrequent words into a single category and, in our case, we will label it as `1`.
Since we will be using a recurrent neural network, it will be convenient if the length of each review is the same. To do this, we will fix a size for our reviews and then pad short reviews with the category 'no word' (which we will label `0`) and truncate long reviews.### (TODO) Create a word dictionary
To begin with, we need to construct a way to map words that appear in the reviews to integers. Here we fix the size of our vocabulary (including the 'no word' and 'infrequent' categories) to be `5000` but you may wish to change this to see how it affects the model.
> **TODO:** Complete the implementation for the `build_dict()` method below. Note that even though the vocab_size is set to `5000`, we only want to construct a mapping for the most frequently appearing `4998` words. This is because we want to reserve the special labels `0` for 'no word' and `1` for 'infrequent word'.<jupyter_code>import numpy as np
def build_dict(data, vocab_size = 5000):
"""Construct and return a dictionary mapping each of the most frequently appearing words to a unique integer."""
# TODO: Determine how often each word appears in `data`. Note that `data` is a list of sentences and that a
# sentence is a list of words.
word_count = {} # A dict storing the words that appear in the reviews along with how often they occur
for sentence in data:
for word in sentence:
if word in word_count:
word_count[word] += 1
else:
word_count[word] = 1
# TODO: Sort the words found in `data` so that sorted_words[0] is the most frequently appearing word and
# sorted_words[-1] is the least frequently appearing word.
sorted_words = None
sorted_words = sorted(word_count, key=word_count.get, reverse=True)
word_dict = {} # This is what we are building, a dictionary that translates words into integers
for idx, word in enumerate(sorted_words[:vocab_size - 2]): # The -2 is so that we save room for the 'no word'
word_dict[word] = idx + 2 # 'infrequent' labels
return word_dict
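# Tiny illustrative check (not part of the original project code): on a toy corpus the
# most frequent word gets index 2, because 0 and 1 are reserved for 'no word' / 'infrequent'.
toy_corpus = [['good', 'movi'], ['bad', 'movi'], ['movi']]
print(build_dict(toy_corpus, vocab_size=5))   # e.g. {'movi': 2, 'good': 3, 'bad': 4}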
word_dict = build_dict(train_X)<jupyter_output><empty_output><jupyter_text>**Question:** What are the five most frequently appearing (tokenized) words in the training set? Does it make sense that these words appear frequently in the training set?**Answer:** movi,
film,
one,
like,
time.<jupyter_code>count = 0
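# word_dict preserves insertion order (most frequent word first), so the first five keys
# printed below are the five most common words in the training set.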
for word in word_dict:
print(word)
if count == 4:
break
count += 1<jupyter_output>movi
film
one
like
time
<jupyter_text>### Save `word_dict`
Later on when we construct an endpoint which processes a submitted review we will need to make use of the `word_dict` which we have created. As such, we will save it to a file now for future use.<jupyter_code>data_dir = '../data/pytorch' # The folder we will use for storing data
if not os.path.exists(data_dir): # Make sure that the folder exists
os.makedirs(data_dir)
with open(os.path.join(data_dir, 'word_dict.pkl'), "wb") as f:
pickle.dump(word_dict, f)<jupyter_output><empty_output><jupyter_text>### Transform the reviews
Now that we have our word dictionary which allows us to transform the words appearing in the reviews into integers, it is time to make use of it and convert our reviews to their integer sequence representation, making sure to pad or truncate to a fixed length, which in our case is `500`.<jupyter_code>def convert_and_pad(word_dict, sentence, pad=500):
NOWORD = 0 # We will use 0 to represent the 'no word' category
INFREQ = 1 # and we use 1 to represent the infrequent words, i.e., words not appearing in word_dict
working_sentence = [NOWORD] * pad
for word_index, word in enumerate(sentence[:pad]):
if word in word_dict:
working_sentence[word_index] = word_dict[word]
else:
working_sentence[word_index] = INFREQ
return working_sentence, min(len(sentence), pad)
def convert_and_pad_data(word_dict, data, pad=500):
result = []
lengths = []
for sentence in data:
converted, leng = convert_and_pad(word_dict, sentence, pad)
result.append(converted)
lengths.append(leng)
return np.array(result), np.array(lengths)
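# Quick illustration (not part of the original project code): convert a tiny tokenized
# review with a short pad length just to see the output format. Unknown words map to 1
# and padding to 0; the exact integers depend on the word_dict built above.
example, example_len = convert_and_pad(word_dict, ['movi', 'great', 'qwertyuiop'], pad=10)
print(example, example_len)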
train_X, train_X_len = convert_and_pad_data(word_dict, train_X)
test_X, test_X_len = convert_and_pad_data(word_dict, test_X)<jupyter_output><empty_output><jupyter_text>As a quick check to make sure that things are working as intended, check to see what one of the reviews in the training set looks like after having been processed. Does this look reasonable? What is the length of a review in the training set?<jupyter_code>len(train_X)<jupyter_output><empty_output><jupyter_text>**Question:** In the cells above we use the `preprocess_data` and `convert_and_pad_data` methods to process both the training and testing set. Why or why not might this be a problem?**Answer:** `preprocess_data` is computationally costly. In addition, the training set is large: after padding, each review is 500 entries long, and we have 2,500 reviews. The same applies to `convert_and_pad_data`.## Step 3: Upload the data to S3
As in the XGBoost notebook, we will need to upload the training dataset to S3 in order for our training code to access it. For now we will save it locally and we will upload to S3 later on.
### Save the processed training dataset locally
It is important to note the format of the data that we are saving as we will need to know it when we write the training code. In our case, each row of the dataset has the form `label`, `length`, `review[500]` where `review[500]` is a sequence of `500` integers representing the words in the review.<jupyter_code>import pandas as pd
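# Each row written below has the form: label, length, then the 500 word ids (502 columns in total).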
pd.concat([pd.DataFrame(train_y), pd.DataFrame(train_X_len), pd.DataFrame(train_X)], axis=1) \
.to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)<jupyter_output><empty_output><jupyter_text>### Uploading the training data
Next, we need to upload the training data to the SageMaker default S3 bucket so that we can provide access to it while training our model.<jupyter_code>import sagemaker
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
prefix = 'sagemaker/sentiment_rnn'
role = sagemaker.get_execution_role()
input_data = sagemaker_session.upload_data(path=data_dir, bucket=bucket, key_prefix=prefix)<jupyter_output><empty_output><jupyter_text>**NOTE:** The cell above uploads the entire contents of our data directory. This includes the `word_dict.pkl` file. This is fortunate as we will need this later on when we create an endpoint that accepts an arbitrary review. For now, we will just take note of the fact that it resides in the data directory (and so also in the S3 training bucket) and that we will need to make sure it gets saved in the model directory.## Step 4: Build and Train the PyTorch Model
In the XGBoost notebook we discussed what a model is in the SageMaker framework. In particular, a model comprises three objects
- Model Artifacts,
- Training Code, and
- Inference Code,
each of which interacts with the others. In the XGBoost example we used training and inference code that was provided by Amazon. Here we will still be using containers provided by Amazon, with the added benefit of being able to include our own custom code.
We will start by implementing our own neural network in PyTorch along with a training script. For the purposes of this project we have provided the necessary model object in the `model.py` file, inside of the `train` folder. You can see the provided implementation by running the cell below.<jupyter_code>!pygmentize train/model.py<jupyter_output>[34mimport[39;49;00m [04m[36mtorch.nn[39;49;00m [34mas[39;49;00m [04m[36mnn[39;49;00m
[34mclass[39;49;00m [04m[32mLSTMClassifier[39;49;00m(nn.Module):
[33m"""[39;49;00m
[33m This is the simple RNN model we will be using to perform Sentiment Analysis.[39;49;00m
[33m """[39;49;00m
[34mdef[39;49;00m [32m__init__[39;49;00m([36mself[39;49;00m, embedding_dim, hidden_dim, vocab_size):
[33m"""[39;49;00m
[33m Initialize the model by settingg up the various layers.[39;49;00m
[33m """[39;49;00m
[36msuper[39;49;00m(LSTMClassifier, [36mself[39;49;00m).[32m__init__[39;49;00m()
[36mself[39;49;00m.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx=[34m0[39;49;00m)
[36mself[39;49;00m.lstm = nn.LSTM(embedding_dim, hidden_dim)
[36mself[39;49;00m.dense = nn.Linear(in_features=hidden_dim, out_features=[34m1[39;49;00m)
[36mself[39;49;00m.sig = nn.Sigm[...]<jupyter_text>The important takeaway from the implementation provided is that there are three parameters that we may wish to tweak to improve the performance of our model. These are the embedding dimension, the hidden dimension and the size of the vocabulary. We will likely want to make these parameters configurable in the training script so that if we wish to modify them we do not need to modify the script itself. We will see how to do this later on. To start we will write some of the training code in the notebook so that we can more easily diagnose any issues that arise.
First we will load a small portion of the training data set to use as a sample. It would be very time consuming to try and train the model completely in the notebook as we do not have access to a gpu and the compute instance that we are using is not particularly powerful. However, we can work on a small bit of the data to get a feel for how our training script is behaving.<jupyter_code>import torch
import torch.utils.data
# Read in only the first 250 rows
train_sample = pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None, names=None, nrows=250)
# Turn the input pandas dataframe into tensors
train_sample_y = torch.from_numpy(train_sample[[0]].values).float().squeeze()
train_sample_X = torch.from_numpy(train_sample.drop([0], axis=1).values).long()
# Build the dataset
train_sample_ds = torch.utils.data.TensorDataset(train_sample_X, train_sample_y)
# Build the dataloader
train_sample_dl = torch.utils.data.DataLoader(train_sample_ds, batch_size=50)<jupyter_output><empty_output><jupyter_text>### (TODO) Writing the training method
Next we need to write the training code itself. This should be very similar to training methods that you have written before to train PyTorch models. We will leave any difficult aspects such as model saving / loading and parameter loading until a little later.<jupyter_code>def train(model, train_loader, epochs, optimizer, loss_fn, device):
for epoch in range(1, epochs + 1):
model.train()
total_loss = 0
for batch in train_loader:
batch_X, batch_y = batch
batch_X = batch_X.to(device)
batch_y = batch_y.to(device)
# TODO: Complete this train method to train the model provided.
optimizer.zero_grad()
            out = model(batch_X)
loss = loss_fn(out, batch_y)
loss.backward()
optimizer.step()
total_loss += loss.data.item()
print("Epoch: {}, BCELoss: {}".format(epoch, total_loss / len(train_loader)))<jupyter_output><empty_output><jupyter_text>Supposing we have the training method above, we will test that it is working by writing a bit of code in the notebook that executes our training method on the small sample training set that we loaded earlier. The reason for doing this in the notebook is so that we have an opportunity to fix any errors that arise early when they are easier to diagnose.<jupyter_code>import torch.optim as optim
from train.model import LSTMClassifier
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = LSTMClassifier(32, 100, 5000).to(device)
optimizer = optim.Adam(model.parameters())
loss_fn = torch.nn.BCELoss()
train(model, train_sample_dl, 5, optimizer, loss_fn, device)<jupyter_output>Epoch: 1, BCELoss: 0.6944114208221436
Epoch: 2, BCELoss: 0.6861721515655518
Epoch: 3, BCELoss: 0.6793698191642761
Epoch: 4, BCELoss: 0.6721140742301941
Epoch: 5, BCELoss: 0.6635747909545898
<jupyter_text>In order to construct a PyTorch model using SageMaker we must provide SageMaker with a training script. We may optionally include a directory which will be copied to the container and from which our training code will be run. When the training container is executed it will check the uploaded directory (if there is one) for a `requirements.txt` file and install any required Python libraries, after which the training script will be run.### (TODO) Training the model
When a PyTorch model is constructed in SageMaker, an entry point must be specified. This is the Python file which will be executed when the model is trained. Inside of the `train` directory is a file called `train.py` which has been provided and which contains most of the necessary code to train our model. The only thing that is missing is the implementation of the `train()` method which you wrote earlier in this notebook.
**TODO**: Copy the `train()` method written above and paste it into the `train/train.py` file where required.
The way that SageMaker passes hyperparameters to the training script is by way of arguments. These arguments can then be parsed and used in the training script. To see how this is done take a look at the provided `train/train.py` file.<jupyter_code>from sagemaker.pytorch import PyTorch
estimator = PyTorch(entry_point="train.py",
source_dir="train",
role=role,
framework_version='0.4.0',
train_instance_count=1,
train_instance_type='ml.p2.xlarge',
hyperparameters={
'epochs': 10,
'hidden_dim': 200,
})
estimator.fit({'training': input_data})<jupyter_output>2019-12-08 11:58:15 Starting - Starting the training job...
2019-12-08 11:58:16 Starting - Launching requested ML instances...
2019-12-08 11:59:11 Starting - Preparing the instances for training......
2019-12-08 12:00:16 Downloading - Downloading input data......
2019-12-08 12:00:50 Training - Downloading the training image...
2019-12-08 12:01:31 Training - Training image download completed. Training in progress.[34mbash: cannot set terminal process group (-1): Inappropriate ioctl for device[0m
[34mbash: no job control in this shell[0m
[34m2019-12-08 12:01:31,977 sagemaker-containers INFO Imported framework sagemaker_pytorch_container.training[0m
[34m2019-12-08 12:01:32,001 sagemaker_pytorch_container.training INFO Block until all host DNS lookups succeed.[0m
[34m2019-12-08 12:01:32,622 sagemaker_pytorch_container.training INFO Invoking user training script.[0m
[34m2019-12-08 12:01:32,857 sagemaker-containers INFO Module train does not provide a setup.py. [0[...]<jupyter_text>## Step 5: Testing the model
As mentioned at the top of this notebook, we will be testing this model by first deploying it and then sending the testing data to the deployed endpoint. We will do this so that we can make sure that the deployed model is working correctly.
## Step 6: Deploy the model for testing
Now that we have trained our model, we would like to test it to see how it performs. Currently our model takes input of the form `review_length, review[500]` where `review[500]` is a sequence of `500` integers which describe the words present in the review, encoded using `word_dict`. Fortunately for us, SageMaker provides built-in inference code for models with simple inputs such as this.
There is one thing that we need to provide, however, and that is a function which loads the saved model. This function must be called `model_fn()` and takes as its only parameter a path to the directory where the model artifacts are stored. This function must also be present in the python file which we specified as the entry point. In our case the model loading function has been provided and so no changes need to be made.
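For orientation, a minimal sketch of what such a loading function can look like is shown below. This is not the code provided in `train.py` — in particular, the artifact file names (`model_info.pth`, `model.pth`) and the saved hyperparameter keys are assumptions made for illustration only, so check `train.py` for the real details.

```python
# Rough sketch of a model_fn -- illustrative only, see train.py for the provided version.
import os
import torch
from model import LSTMClassifier

def model_fn(model_dir):
    """Load the trained PyTorch model from the directory containing the model artifacts."""
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Hyperparameters saved at training time (assumed file name and keys).
    model_info = torch.load(os.path.join(model_dir, 'model_info.pth'))
    model = LSTMClassifier(model_info['embedding_dim'],
                           model_info['hidden_dim'],
                           model_info['vocab_size'])

    # Trained weights (assumed file name); switch to evaluation mode for inference.
    with open(os.path.join(model_dir, 'model.pth'), 'rb') as f:
        model.load_state_dict(torch.load(f, map_location='cpu'))
    return model.to(device).eval()
```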
**NOTE**: When the built-in inference code is run it must import the `model_fn()` method from the `train.py` file. This is why the training code is wrapped in a main guard (i.e., `if __name__ == '__main__':`).
Since we don't need to change anything in the code that was uploaded during training, we can simply deploy the current model as-is.
**NOTE:** When deploying a model you are asking SageMaker to launch a compute instance that will wait for data to be sent to it. As a result, this compute instance will continue to run until *you* shut it down. This is important to know since the cost of a deployed endpoint depends on how long it has been running.
In other words **If you are no longer using a deployed endpoint, shut it down!**
**TODO:** Deploy the trained model.<jupyter_code># TODO: Deploy the trained model
predictor = estimator.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')<jupyter_output>-------------------------------------------------------------------------------------!<jupyter_text>## Step 7 - Use the model for testing
Once deployed, we can read in the test data and send it off to our deployed model to get some results. Once we collect all of the results we can determine how accurate our model is.<jupyter_code>test_X = pd.concat([pd.DataFrame(test_X_len), pd.DataFrame(test_X)], axis=1)
# We split the data into chunks and send each chunk separately, accumulating the results.
def predict(data, rows=512):
split_array = np.array_split(data, int(data.shape[0] / float(rows) + 1))
predictions = np.array([])
for array in split_array:
predictions = np.append(predictions, predictor.predict(array))
return predictions
predictions = predict(test_X.values)
predictions = [round(num) for num in predictions]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)<jupyter_output><empty_output><jupyter_text>**Question:** How does this model compare to the XGBoost model you created earlier? Why might these two models perform differently on this dataset? Which do *you* think is better for sentiment analysis?**Answer:** XGBoost is well suited to tabular data with a relatively small number of variables, whereas neural-network-based deep learning is well suited to images or data with a large number of variables. Both methods are strong in their own right and are well respected. Here, we are dealing with fairly large tabular data, and the neural network's accuracy is reasonable. However, due to the tabular nature of the data, the XGBoost results are nearly the same as the NN's.### (TODO) More testing
We now have a trained model which has been deployed and which we can send processed reviews to and which returns the predicted sentiment. However, ultimately we would like to be able to send our model an unprocessed review. That is, we would like to send the review itself as a string. For example, suppose we wish to send the following review to our model.<jupyter_code>test_review = 'The simplest pleasures in life are the best, and this film is one of them. Combining a rather basic storyline of love and adventure this movie transcends the usual weekend fair with wit and unmitigated charm.'<jupyter_output><empty_output><jupyter_text>The question we now need to answer is, how do we send this review to our model?
Recall in the first section of this notebook we did a bunch of data processing to the IMDb dataset. In particular, we did two specific things to the provided reviews.
- Removed any html tags and stemmed the input
- Encoded the review as a sequence of integers using `word_dict`
In order to process the review we will need to repeat these two steps.
**TODO**: Using the `review_to_words` and `convert_and_pad` methods from section one, convert `test_review` into a numpy array `test_data` suitable to send to our model. Remember that our model expects input of the form `review_length, review[500]`.<jupyter_code># TODO: Convert test_review into a form usable by the model and save the results in test_data
test_data = None
test_data_review_to_words = review_to_words(test_review)
review, length = convert_and_pad(word_dict, test_data_review_to_words)
# The model expects each row to have the form: review_length, review[500]
test_data = np.array([[length] + review])<jupyter_output><empty_output><jupyter_text>Now that we have processed the review, we can send the resulting array to our model to predict the sentiment of the review.<jupyter_code>predictor.predict(test_data)<jupyter_output><empty_output><jupyter_text>Since the return value of our model is close to `1`, we can be certain that the review we submitted is positive.### Delete the endpoint
Of course, just like in the XGBoost notebook, once we've deployed an endpoint it continues to run until we tell it to shut down. Since we are done using our endpoint for now, we can delete it.<jupyter_code>estimator.delete_endpoint()<jupyter_output><empty_output><jupyter_text>## Step 6 (again) - Deploy the model for the web app
Now that we know that our model is working, it's time to create some custom inference code so that we can send the model a review which has not been processed and have it determine the sentiment of the review.
As we saw above, by default the estimator which we created, when deployed, will use the entry script and directory which we provided when creating the model. However, since we now wish to accept a string as input and our model expects a processed review, we need to write some custom inference code.
We will store the code that we write in the `serve` directory. Provided in this directory is the `model.py` file that we used to construct our model, a `utils.py` file which contains the `review_to_words` and `convert_and_pad` pre-processing functions which we used during the initial data processing, and `predict.py`, the file which will contain our custom inference code. Note also that `requirements.txt` is present which will tell SageMaker what Python libraries are required by our custom inference code.
When deploying a PyTorch model in SageMaker, you are expected to provide four functions which the SageMaker inference container will use.
- `model_fn`: This function is the same function that we used in the training script and it tells SageMaker how to load our model.
- `input_fn`: This function receives the raw serialized input that has been sent to the model's endpoint and its job is to de-serialize and make the input available for the inference code.
- `output_fn`: This function takes the output of the inference code and its job is to serialize this output and return it to the caller of the model's endpoint.
- `predict_fn`: The heart of the inference script, this is where the actual prediction is done and is the function which you will need to complete.
For the simple website that we are constructing during this project, the `input_fn` and `output_fn` methods are relatively straightforward. We only require being able to accept a string as input and we expect to return a single value as output. You might imagine though that in a more complex application the input or output may be image data or some other binary data which would require some effort to serialize.
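To make this concrete, here is a hedged sketch of how these pieces could fit together for our plain-text use case. It is illustrative only: the real `model_fn`, `input_fn`, and `output_fn` are already provided in `serve/predict.py`, the `predict_fn` shown is just one possible way to approach the upcoming TODO, and attaching the word dictionary to the model as `model.word_dict` is an assumption that needs to match what the provided `model_fn` actually does.

```python
# Hedged sketch only -- compare with the code shown by `!pygmentize serve/predict.py`.
import numpy as np
import torch

from utils import review_to_words, convert_and_pad  # helpers provided in the serve directory

def input_fn(serialized_input_data, content_type):
    # The endpoint receives the raw review as plain-text bytes.
    if content_type != 'text/plain':
        raise ValueError('Unsupported content type: ' + content_type)
    return serialized_input_data.decode('utf-8')

def output_fn(prediction_output, accept):
    # Return the single 0/1 prediction as a string.
    return str(prediction_output)

def predict_fn(input_data, model):
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Repeat the preprocessing applied to the training data: clean/stem, then encode and pad.
    words = review_to_words(input_data)
    data_X, data_len = convert_and_pad(model.word_dict, words)  # assumes model_fn attached word_dict

    # Build a single row of the form the model expects: review_length, review[500].
    data = np.hstack(([data_len], data_X)).reshape(1, -1)
    data = torch.from_numpy(data).long().to(device)

    model.eval()
    with torch.no_grad():
        output = model(data)

    return int(np.round(output.cpu().numpy()).item())
```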
### (TODO) Writing inference code
Before writing our custom inference code, we will begin by taking a look at the code which has been provided.<jupyter_code>!pygmentize serve/predict.py<jupyter_output>[34mimport[39;49;00m [04m[36margparse[39;49;00m
[34mimport[39;49;00m [04m[36mjson[39;49;00m
[34mimport[39;49;00m [04m[36mos[39;49;00m
[34mimport[39;49;00m [04m[36mpickle[39;49;00m
[34mimport[39;49;00m [04m[36msys[39;49;00m
[34mimport[39;49;00m [04m[36msagemaker_containers[39;49;00m
[34mimport[39;49;00m [04m[36mpandas[39;49;00m [34mas[39;49;00m [04m[36mpd[39;49;00m
[34mimport[39;49;00m [04m[36mnumpy[39;49;00m [34mas[39;49;00m [04m[36mnp[39;49;00m
[34mimport[39;49;00m [04m[36mtorch[39;49;00m
[34mimport[39;49;00m [04m[36mtorch.nn[39;49;00m [34mas[39;49;00m [04m[36mnn[39;49;00m
[34mimport[39;49;00m [04m[36mtorch.optim[39;49;00m [34mas[39;49;00m [04m[36moptim[39;49;00m
[34mimport[39;49;00m [04m[36mtorch.utils.data[39;49;00m
[34mfrom[39;49;00m [04m[36mmodel[39;49;00m [34mimport[39;49;00m LSTMClassifier
[34mfrom[39;49;00m [04m[36mutils[39;49;00m [34mimport[39;49;00m review_to_words, [...]<jupyter_text>As mentioned earlier, the `model_fn` method is the same as the one provided in the training code and the `input_fn` and `output_fn` methods are very simple and your task will be to complete the `predict_fn` method. Make sure that you save the completed file as `predict.py` in the `serve` directory.
**TODO**: Complete the `predict_fn()` method in the `serve/predict.py` file.### Deploying the model
Now that the custom inference code has been written, we will create and deploy our model. To begin with, we need to construct a new PyTorchModel object which points to the model artifacts created during training and also points to the inference code that we wish to use. Then we can call the deploy method to launch the deployment container.
**NOTE**: The default behaviour for a deployed PyTorch model is to assume that any input passed to the predictor is a `numpy` array. In our case we want to send a string so we need to construct a simple wrapper around the `RealTimePredictor` class to accommodate simple strings. In a more complicated situation you may want to provide a serialization object, for example if you wanted to send image data.
from sagemaker.pytorch import PyTorchModel
class StringPredictor(RealTimePredictor):
def __init__(self, endpoint_name, sagemaker_session):
super(StringPredictor, self).__init__(endpoint_name, sagemaker_session, content_type='text/plain')
model = PyTorchModel(model_data=estimator.model_data,
role = role,
framework_version='0.4.0',
entry_point='predict.py',
source_dir='serve',
predictor_cls=StringPredictor)
predictor = model.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')<jupyter_output>-------------------------------------------------------------------------------------!<jupyter_text>### Testing the model
Now that we have deployed our model with the custom inference code, we should test to see if everything is working. Here we test our model by loading the first `250` positive and negative reviews, sending them to the endpoint, and then collecting the results. The reason for only sending some of the data is that the amount of time it takes for our model to process the input and then perform inference is quite long, so testing the entire data set would be prohibitive.<jupyter_code>import glob
def test_reviews(data_dir='../data/aclImdb', stop=250):
results = []
ground = []
# We make sure to test both positive and negative reviews
for sentiment in ['pos', 'neg']:
path = os.path.join(data_dir, 'test', sentiment, '*.txt')
files = glob.glob(path)
files_read = 0
print('Starting ', sentiment, ' files')
# Iterate through the files and send them to the predictor
for f in files:
with open(f) as review:
# First, we store the ground truth (was the review positive or negative)
if sentiment == 'pos':
ground.append(1)
else:
ground.append(0)
# Read in the review and convert to 'utf-8' for transmission via HTTP
review_input = review.read().encode('utf-8')
# Send the review to the predictor and store the results
results.append(float(predictor.predict(review_input)))
# Sending reviews to our endpoint one at a time takes a while so we
# only send a small number of reviews
files_read += 1
if files_read == stop:
break
return ground, results
ground, results = test_reviews()
from sklearn.metrics import accuracy_score
accuracy_score(ground, results)<jupyter_output><empty_output><jupyter_text>As an additional test, we can try sending the `test_review` that we looked at earlier.<jupyter_code>predictor.predict(test_review)<jupyter_output><empty_output><jupyter_text>Now that we know our endpoint is working as expected, we can set up the web page that will interact with it. If you don't have time to finish the project now, make sure to skip down to the end of this notebook and shut down your endpoint. You can deploy it again when you come back.## Step 7 (again): Use the model for the web app
> **TODO:** This entire section and the next contain tasks for you to complete, mostly using the AWS console.
So far we have been accessing our model endpoint by constructing a predictor object which uses the endpoint and then just using the predictor object to perform inference. What if we wanted to create a web app which accessed our model? The way things are set up currently makes that not possible since in order to access a SageMaker endpoint the app would first have to authenticate with AWS using an IAM role which included access to SageMaker endpoints. However, there is an easier way! We just need to use some additional AWS services.
The diagram above gives an overview of how the various services will work together. On the far right is the model which we trained above and which is deployed using SageMaker. On the far left is our web app that collects a user's movie review, sends it off and expects a positive or negative sentiment in return.
In the middle is where some of the magic happens. We will construct a Lambda function, which you can think of as a straightforward Python function that can be executed whenever a specified event occurs. We will give this function permission to send and receive data from a SageMaker endpoint.
Lastly, the method we will use to execute the Lambda function is a new endpoint that we will create using API Gateway. This endpoint will be a url that listens for data to be sent to it. Once it gets some data it will pass that data on to the Lambda function and then return whatever the Lambda function returns. Essentially it will act as an interface that lets our web app communicate with the Lambda function.
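To make the data flow concrete, the snippet below sketches what any client of the finished system (including our web page) effectively does. The URL is a placeholder for the Invoke URL you will obtain from API Gateway later in this section, and the use of the `requests` package here is purely illustrative.

```python
# Hypothetical client-side test of the public API (placeholder URL, illustrative only).
import requests

api_url = 'https://<api-id>.execute-api.<region>.amazonaws.com/prod'  # replace with your Invoke URL
review = 'This movie was an absolute delight from start to finish.'

response = requests.post(api_url, data=review)
print(response.text)  # should come back as '0' (negative) or '1' (positive)
```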
### Setting up a Lambda function
The first thing we are going to do is set up a Lambda function. This Lambda function will be executed whenever our public API has data sent to it. When it is executed it will receive the data, perform any sort of processing that is required, send the data (the review) to the SageMaker endpoint we've created and then return the result.
#### Part A: Create an IAM Role for the Lambda function
Since we want the Lambda function to call a SageMaker endpoint, we need to make sure that it has permission to do so. To do this, we will construct a role that we can later give the Lambda function.
Using the AWS Console, navigate to the **IAM** page and click on **Roles**. Then, click on **Create role**. Make sure that the **AWS service** is the type of trusted entity selected and choose **Lambda** as the service that will use this role, then click **Next: Permissions**.
In the search box type `sagemaker` and select the check box next to the **AmazonSageMakerFullAccess** policy. Then, click on **Next: Review**.
Lastly, give this role a name. Make sure you use a name that you will remember later on, for example `LambdaSageMakerRole`. Then, click on **Create role**.
#### Part B: Create a Lambda function
Now it is time to actually create the Lambda function.
Using the AWS Console, navigate to the AWS Lambda page and click on **Create a function**. When you get to the next page, make sure that **Author from scratch** is selected. Now, name your Lambda function, using a name that you will remember later on, for example `sentiment_analysis_func`. Make sure that the **Python 3.6** runtime is selected and then choose the role that you created in the previous part. Then, click on **Create Function**.
On the next page you will see some information about the Lambda function you've just created. If you scroll down you should see an editor in which you can write the code that will be executed when your Lambda function is triggered. In our example, we will use the code below.
```python
# We need to use the low-level library to interact with SageMaker since the SageMaker API
# is not available natively through Lambda.
import boto3
def lambda_handler(event, context):
# The SageMaker runtime is what allows us to invoke the endpoint that we've created.
runtime = boto3.Session().client('sagemaker-runtime')
# Now we use the SageMaker runtime to invoke our endpoint, sending the review we were given
response = runtime.invoke_endpoint(EndpointName = '**ENDPOINT NAME HERE**', # The name of the endpoint we created
ContentType = 'text/plain', # The data format that is expected
Body = event['body']) # The actual review
# The response is an HTTP response whose body contains the result of our inference
result = response['Body'].read().decode('utf-8')
return {
'statusCode' : 200,
'headers' : { 'Content-Type' : 'text/plain', 'Access-Control-Allow-Origin' : '*' },
'body' : result
}
```
Once you have copied and pasted the code above into the Lambda code editor, replace the `**ENDPOINT NAME HERE**` portion with the name of the endpoint that we deployed earlier. You can determine the name of the endpoint using the code cell below.<jupyter_code>predictor.endpoint<jupyter_output><empty_output><jupyter_text>Once you have added the endpoint name to the Lambda function, click on **Save**. Your Lambda function is now up and running. Next we need to create a way for our web app to execute the Lambda function.
### Setting up API Gateway
Now that our Lambda function is set up, it is time to create a new API using API Gateway that will trigger the Lambda function we have just created.
Using AWS Console, navigate to **Amazon API Gateway** and then click on **Get started**.
On the next page, make sure that **New API** is selected and give the new api a name, for example, `sentiment_analysis_api`. Then, click on **Create API**.
Now we have created an API, however it doesn't currently do anything. What we want it to do is to trigger the Lambda function that we created earlier.
Select the **Actions** dropdown menu and click **Create Method**. A new blank method will be created, select its dropdown menu and select **POST**, then click on the check mark beside it.
For the integration point, make sure that **Lambda Function** is selected and click on the **Use Lambda Proxy integration**. This option makes sure that the data that is sent to the API is then sent directly to the Lambda function with no processing. It also means that the return value must be a proper response object as it will also not be processed by API Gateway.
Type the name of the Lambda function you created earlier into the **Lambda Function** text entry box and then click on **Save**. Click on **OK** in the pop-up box that then appears, giving permission to API Gateway to invoke the Lambda function you created.
The last step in creating the API Gateway is to select the **Actions** dropdown and click on **Deploy API**. You will need to create a new Deployment stage and name it anything you like, for example `prod`.
You have now successfully set up a public API to access your SageMaker model. Make sure to copy or write down the URL provided to invoke your newly created public API as this will be needed in the next step. This URL can be found at the top of the page, highlighted in blue next to the text **Invoke URL**.## Step 4: Deploying our web app
Now that we have a publicly available API, we can start using it in a web app. For our purposes, we have provided a simple static html file which can make use of the public api you created earlier.
In the `website` folder there should be a file called `index.html`. Download the file to your computer and open that file up in a text editor of your choice. There should be a line which contains **\*\*REPLACE WITH PUBLIC API URL\*\***. Replace this string with the url that you wrote down in the last step and then save the file.
Now, if you open `index.html` on your local computer, your browser will behave as a local web server and you can use the provided site to interact with your SageMaker model.
If you'd like to go further, you can host this html file anywhere you'd like, for example using github or hosting a static site on Amazon's S3. Once you have done this you can share the link with anyone you'd like and have them play with it too!
> **Important Note** In order for the web app to communicate with the SageMaker endpoint, the endpoint has to actually be deployed and running. This means that you are paying for it. Make sure that the endpoint is running when you want to use the web app but that you shut it down when you don't need it, otherwise you will end up with a surprisingly large AWS bill.
**TODO:** Make sure that you include the edited `index.html` file in your project submission.Now that your web app is working, try playing around with it and see how well it works.
**Question**: Give an example of a review that you entered into your web app. What was the predicted sentiment of your example review?**Answer:** The positive example: "The simplest pleasures in life are the best, and this film is one of them. Combining a rather basic storyline of love and adventure this movie transcends the usual weekend fair with wit and unmitigated charm."
The negative example: "I hated the movie. I didn't like the actors at all. the story was lame. I couldn't figure any pattern in the story line"### Delete the endpoint
Remember to always shut down your endpoint if you are no longer using it. You are charged for the length of time that the endpoint is running so if you forget and leave it on you could end up with an unexpectedly large bill.<jupyter_code>predictor.delete_endpoint()<jupyter_output><empty_output>
<jupyter_start><jupyter_text>## Modelling New data against vulnerability threshold
#### Use of upsampling in training set<jupyter_code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import pearsonr
from sklearn.linear_model import LogisticRegression
import statsmodels.api as sm
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import f_classif, chi2, mutual_info_classif, SelectKBest
import pprint
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn.metrics import roc_auc_score, roc_curve, auc
data = pd.read_csv('new_data.csv')
data.shape
data.head()
x = data.drop(columns=['Unnamed: 0', 'process_days', 'process_day_>=1', 'Vulnerability Threshold'])
y = data['Vulnerability Threshold']
var = y.name
data['Vulnerability Threshold'].value_counts()
from sklearn.utils import resample
def upsample(x,y):
# Find target name
var= y.name
df = pd.concat([x, y], axis=1)
# Find minority & majority class
if df[df[var]==0].shape[0] > df[df[var]==1].shape[0]:
df_majority = df[df[var]==0]
df_minority = df[df[var]==1]
else:
df_majority = df[df[var]==1]
df_minority = df[df[var]==0]
# Upsample minority class
df_minority_upsampled = resample(df_minority,
replace=True, # sample with replacement
n_samples=df_majority.shape[0], # to match majority class
random_state=123) # reproducible results
# Combine majority class with upsampled minority class
df_upsampled = pd.concat([df_majority, df_minority_upsampled])
# Display new class counts
print(df_upsampled[var].value_counts())
new_x = df_upsampled.drop(var,axis=1)
new_y = df_upsampled[var]
return new_x, new_y
def downsample(x,y):
# Find target name!
var= y.name
df = pd.concat([x, y], axis=1)
# Find minority & majority class
if df[df[var]==0].shape[0] > df[df[var]==1].shape[0]:
df_majority = df[df[var]==0]
df_minority = df[df[var]==1]
else:
df_majority = df[df[var]==1]
df_minority = df[df[var]==0]
# if size == None:
# n = df_minority.shape[0]
# else: n = size
    # Downsample the majority class
    df_majority_downsampled = resample(df_majority,
                                 replace=True,     # sample with replacement
                                 n_samples=df_minority.shape[0],    # to match minority class
                                 random_state=123) # reproducible results
    # Combine the minority class with the downsampled majority class
    df_downsampled = pd.concat([df_minority, df_majority_downsampled])
# Display new class counts
print(df_downsampled[var].value_counts())
new_x = df_downsampled.drop(var,axis=1)
new_y = df_downsampled[var]
return new_x, new_y<jupyter_output><empty_output><jupyter_text>### Training<jupyter_code>x_train, x_test, y_train, y_test = train_test_split(x,
y,
test_size=0.3,
random_state=123)
print(x_train.shape)
print(x_test.shape)
print(y_train.shape)
print(y_test.shape)
x_train, y_train = upsample(x_train, y_train)
print(x_train.shape)
print(x_test.shape)
print(y_train.shape)
print(y_test.shape)
def get_feat(method, k, x, y):
select = SelectKBest(method, k)
_ = select.fit_transform(x, y)
return [x.columns[i] for i, val in enumerate(select.get_support()) if val]
num_features = 25
chi2_select = get_feat(chi2, num_features, x_train, y_train)
f_select = get_feat(f_classif, num_features, x_train, y_train)
all_feat = x.columns
for i in [all_feat,chi2_select,f_select]:
model = LogisticRegression()
model.fit(x_train[i],y_train)
print("test accuracy:", model.score(x_test[i],y_test))
y_pred = model.predict(x_test[i])
cm = confusion_matrix(y_test, y_pred,labels=[1, 0])
tpr = cm[0][0]/np.sum(cm[0])
tnr = cm[1][1]/np.sum(cm[1])
print(cm)
print("True positive rate:", tpr)
print("True negative rate:", tnr)
pprint.pprint(sorted(list(zip(model.coef_[0],x[i].columns)), reverse=True, key=lambda t: abs(t[0])))
print("\n"+"-"*100)
for i in [all_feat]:
model = RandomForestClassifier(max_depth=10,n_estimators = 64,criterion='gini',random_state=123)
model.fit(x_train[i],y_train)
print("test accuracy:", model.score(x_test[i],y_test))
y_pred = model.predict(x_test[i])
cm = confusion_matrix(y_test, y_pred,labels=[1, 0])
tpr = cm[0][0]/np.sum(cm[0])
tnr = cm[1][1]/np.sum(cm[1])
print(cm)
print("True positive rate:", tpr)
print("True negative rate:", tnr)
pprint.pprint(sorted(list(zip(model.feature_importances_,x[i].columns)), reverse=True))
x = data.drop(columns=['Unnamed: 0', 'process_days', 'process_day_>=1', 'Vulnerability Threshold'])
y = data['process_day_>=1']
var = y.name
data['process_day_>=1'].value_counts()
from sklearn.utils import resample
def upsample(x,y):
# Find target name
var= y.name
df = pd.concat([x, y], axis=1)
# Find minority & majority class
if df[df[var]==0].shape[0] > df[df[var]==1].shape[0]:
df_majority = df[df[var]==0]
df_minority = df[df[var]==1]
else:
df_majority = df[df[var]==1]
df_minority = df[df[var]==0]
# Upsample minority class
df_minority_upsampled = resample(df_minority,
replace=True, # sample with replacement
n_samples=df_majority.shape[0], # to match majority class
random_state=123) # reproducible results
# Combine majority class with upsampled minority class
df_upsampled = pd.concat([df_majority, df_minority_upsampled])
# Display new class counts
print(df_upsampled[var].value_counts())
new_x = df_upsampled.drop(var,axis=1)
new_y = df_upsampled[var]
return new_x, new_y
def downsample(x,y):
# Find target name!
var= y.name
df = pd.concat([x, y], axis=1)
# Find minority & majority class
if df[df[var]==0].shape[0] > df[df[var]==1].shape[0]:
df_majority = df[df[var]==0]
df_minority = df[df[var]==1]
else:
df_majority = df[df[var]==1]
df_minority = df[df[var]==0]
# if size == None:
# n = df_minority.shape[0]
# else: n = size
# Upsample minority class
df_majority_downsampled = resample(df_majority,
replace=True, # sample with replacement
n_samples=df_minority.shape[0] , # to match majority class
random_state=123) # reproducible results
# Combine majority class with upsampled minority class
df_downsampled = pd.concat([df_minority, df_majority_downsampled])
# Display new class counts
print(df_downsampled[var].value_counts())
new_x = df_downsampled.drop(var,axis=1)
new_y = df_downsampled[var]
return new_x, new_y<jupyter_output><empty_output><jupyter_text>### Training<jupyter_code>x_train, x_test, y_train, y_test = train_test_split(x,
y,
test_size=0.3,
random_state=123)
print(x_train.shape)
print(x_test.shape)
print(y_train.shape)
print(y_test.shape)
x_train, y_train = upsample(x_train, y_train)
print(x_train.shape)
print(x_test.shape)
print(y_train.shape)
print(y_test.shape)
def get_feat(method, k, x, y):
select = SelectKBest(method, k)
_ = select.fit_transform(x, y)
return [x.columns[i] for i, val in enumerate(select.get_support()) if val]
num_features = 8
chi2_select = get_feat(chi2, num_features, x_train, y_train)
f_select = get_feat(f_classif, num_features, x_train, y_train)
all_feat = x.columns
for i in [chi2_select,f_select,all_feat]:
model = LogisticRegression()
model.fit(x_train[i],y_train)
print("test accuracy:", model.score(x_test[i],y_test))
y_pred = model.predict(x_test[i])
cm = confusion_matrix(y_test, y_pred,labels=[1, 0])
tpr = cm[0][0]/np.sum(cm[0])
tnr = cm[1][1]/np.sum(cm[1])
print(cm)
print("True positive rate:", tpr)
print("True negative rate:", tnr)
pprint.pprint(sorted(list(zip(model.coef_[0],x[i].columns)), reverse=True, key=lambda t: abs(t[0])))
print("\n"+"-"*100)
for i in [all_feat]:
model = RandomForestClassifier(max_depth=10,n_estimators = 64,criterion='gini',random_state=123)
model.fit(x_train[i],y_train)
print("test accuracy:", model.score(x_test[i],y_test))
y_pred = model.predict(x_test[i])
cm = confusion_matrix(y_test, y_pred,labels=[1, 0])
tpr = cm[0][0]/np.sum(cm[0])
tnr = cm[1][1]/np.sum(cm[1])
print(cm)
print("True positive rate:", tpr)
print("True negative rate:", tnr)
pprint.pprint(sorted(list(zip(model.feature_importances_,x[i].columns)), reverse=True))
44/2177<jupyter_output><empty_output>
<jupyter_start><jupyter_text># Word Vectors
This is a small demo notebook to give you a feel for what word vectors are and why they are useful. First, we will visualize the word vectors that you trained using the architectures you built. Then we will look at the GloVe embeddings to see what the state-of-the-art has to offer.---------------## Your Trained Embeddings<jupyter_code># imports
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
from utils import load_data<jupyter_output><empty_output><jupyter_text>### Loading Vocabulary<jupyter_code>vocab, _, _ = load_data(preprocess=False)
print("Total Vocabulary length:", len(vocab))
print()
print(vocab)<jupyter_output>Total Vocabulary length: 250
['!', "'", "''", "'d", "'ll", "'m", "'re", "'s", "'ve", ',', '.', '...', '?', '``', 'a', 'about', 'actually', 'again', 'ah', 'all', 'am', 'an', 'and', 'any', 'anything', 'are', 'as', 'at', 'baby', 'back', 'bad', 'be', 'because', 'been', 'before', 'believe', 'best', 'better', 'big', 'but', 'by', 'ca', 'call', 'can', 'chandler', 'come', 'could', 'd', 'day', 'did', 'didn', 'do', 'does', 'doing', 'don', 'down', 'even', 'ever', 'everything', 'feel', 'fine', 'first', 'for', 'friend', 'from', 'fun', 'get', 'getting', 'girl', 'give', 'go', 'god', 'going', 'gon', 'good', 'got', 'great', 'guess', 'guy', 'guys', 'had', 'happened', 'has', 'have', 'having', 'he', 'help', 'her', 'here', 'hey', 'hi', 'him', 'his', 'honey', 'how', 'huh', 'i', 'i-i', 'if', 'in', 'into', 'is', 'it', 'joey', 'just', 'kinda', 'know', 'last', 'let', 'like', 'listen', 'little', 'll', 'look', 'lot', 'love', 'm', 'made', 'make', 'man', 'married', 'maybe', 'me', 'mean', 'minute', 'monica', 'more',[...]<jupyter_text>### t-distributed Stochastic Neighbor Embedding (TSNE) Plot
This is a very popular dimensionality reduction technique for visualizing high-dimensional embeddings. It tries to preserve the relative (local) distances between the actual word vectors in their 2D projection, often starting from a PCA initialization.
**Note**: This projection is not deterministic. If you run it multiple times you will get different results. Sometimes you need to run it a few times in order to get a good projection.<jupyter_code>def plot_tsne(word_vectors, vocab, n=None):
# pick a random sample of all word vectors if there are too many
index = np.arange(len(vocab))
    if n is not None:
if n > len(vocab):
raise ValueError(f"'n' ({n}) > the length of the vocabulary ({len(vocab)})")
index = np.random.choice(index, size=n, replace=False)
word_vectors = word_vectors[index, :]
# compute tsne
tsne = TSNE(n_components=2).fit_transform(word_vectors)
x = tsne[:,0]
y = tsne[:,1]
# plot
fig, ax = plt.subplots(figsize=(15,15), facecolor='white')
plt.scatter(x, y, s=50, c='c')
for i, idx in enumerate(index):
word = vocab[idx]
ax.annotate(word, (x[i], y[i]), fontsize=12)<jupyter_output><empty_output><jupyter_text>### Baseline
Just to show what it looks like when word vectors have learned nothing, let's see the TSNE plot for one-hot encoded word vectors.<jupyter_code>import torch  # torch is only imported later in this notebook, so import it here for the baseline cell
onehot = torch.zeros(len(vocab), len(vocab))
for i in range(onehot.shape[0]):
onehot[i][i] = 1
plot_tsne(onehot, [""]*len(vocab))<jupyter_output><empty_output><jupyter_text>### Continuous Bag of Words Model<jupyter_code>cbow_word_vectors = np.loadtxt("cbow_word_vectors.csv")
print("embedding size =", cbow_word_vectors.shape[1])
plot_tsne(cbow_word_vectors, vocab)<jupyter_output><empty_output><jupyter_text>### Bengio's Model<jupyter_code>bengio_word_vectors = np.loadtxt("bengio_word_vectors.csv")
plot_tsne(bengio_word_vectors, vocab)
print("embedding size =", bengio_word_vectors.shape[1])<jupyter_output>embedding size = 100
<jupyter_text>### Analysis
What do you notice about words that are close together?-----------## GloVe Embeddings
GloVe stands for **Global Vectors** and is currently the most popular word vector model. It uses a slightly different idea and method in order to obtain word vectors. Mainly, it tries to account for the **co-occurrence** between words. This is a global view rather than a local view of the relationship between words. This information is stored in the **co-occurrence matrix** denoted $\boldsymbol{X}$. Each entry corresponds to the number of times each word appears nearby (say less than 5 positions apart). It is trained with the following cost function
$$
\mathcal{J}(\boldsymbol{R}) = \sum_{i, j} f(x_{ij}) \left( \boldsymbol{r}^T_i \ \boldsymbol{\tilde{r}}_j + \boldsymbol{b}_i + \boldsymbol{\tilde{b}}_j - \log{x_{ij}} \right)^2
$$
$$
f(x_{ij}) = \begin{cases}
\left( \frac{x_{ij}}{100} \right)^{\frac{3}{4}} \qquad &\text{if} \quad x_{ij} < 100 \\
1 \qquad &\text{if} \quad x_{ij} \geq 100
\end{cases}
$$<jupyter_code>import torch
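# Aside (illustrative sketch, unrelated to the torchtext loading below): the GloVe
# weighting function f described above, with x_max = 100 and exponent 3/4.
def glove_weight(x, x_max=100, alpha=0.75):
    return (x / x_max) ** alpha if x < x_max else 1.0
# e.g. glove_weight(10) is roughly 0.178, while glove_weight(500) == 1.0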
# pip install torchtext
import torchtext
# The first time you run this will download a ~823MB file
glove = torchtext.vocab.GloVe(name="6B", # trained on Wikipedia 2014 corpus
                              dim=50)   # embedding size = 50<jupyter_output><empty_output><jupyter_text>### Comparison to Your Trained Embeddings<jupyter_code>def glove_tsne_from_word_list(vocab):
word_vectors = []
for token in vocab:
try:
word_vectors.append(list(glove[token]))
except:
# the GloVe embeddings might not contain some of the tokens
# since they were specific to the TV show Friends
pass
word_vectors = torch.tensor(word_vectors)
plot_tsne(word_vectors, vocab)
glove_tsne_from_word_list(vocab)<jupyter_output><empty_output><jupyter_text>What do you notice about the words that are close together? Is this the same pattern from your embeddings?### Deeper Dive into Word Vectors
Now that we have visualized word vectors on a high level, let's look at them in more detail. What actually is a word vector?<jupyter_code>glove['apple']<jupyter_output><empty_output><jupyter_text>As you can see, it's just a vector of numbers. On its own these numbers don't really mean anything, but when combined with other word vectors they encode both syntactic and semantic information about the word.
More specifically, we have the **Distributional Hypothesis**: words with similar distributions have similar meanings, i.e. the closer together two word vectors are, the more similar their meaning.
There are a few ways we can define distance. The first is **Euclidean Distance** and the second is the **Cosine Distance**. What is the difference between the two and which is a better measure of distance between word vectors?<jupyter_code>def Euclidean_distance(x, y, dim=0):
return torch.norm(x - y, dim=dim)
def Cosine_similarity(x, y, dim=0):
return torch.nn.CosineSimilarity(dim=dim)(x, y)
    # Equivalent manual computation: (x * y).sum(dim=dim) / (torch.norm(x, dim=dim) * torch.norm(y, dim=dim))
def Cosine_distance(x, y, dim=0):
return 1 - Cosine_similarity(x, y, dim=dim)<jupyter_output><empty_output><jupyter_text>$$
\texttt{Euclidean-distance}(\boldsymbol{x}, \boldsymbol{y}) = \Vert \boldsymbol{x} - \boldsymbol{y} \ \Vert_2
$$
$$
\texttt{Cosine-similarity}(\boldsymbol{x}, \boldsymbol{y}) = \frac{ \boldsymbol{x} \cdot \boldsymbol{y}}{\Vert \boldsymbol{x} \ \Vert_2 \ \Vert \boldsymbol{y} \ \Vert_2}
$$
$$
\texttt{Cosine-distance}(\boldsymbol{x}, \boldsymbol{y}) = 1 - \texttt{Cosine-similarity}(\boldsymbol{x}, \boldsymbol{y}) = \texttt{Euclidean-distance} \left( \frac{\boldsymbol{x}}{\Vert \boldsymbol{x} \ \Vert_2}, \frac{\boldsymbol{y}}{\Vert \boldsymbol{y} \ \Vert_2} \right)
$$<jupyter_code>def get_closest_words(vec, n=5, distance=Cosine_distance):
dists = distance(glove.vectors, vec.unsqueeze(0), dim=1)
dists = sorted(enumerate(dists.numpy()), key=lambda x: x[1])
top = dists[1:n+1]
for i, (idx, difference) in enumerate(top):
print(f"{i+1}) {glove.itos[idx]:15s} {difference:5.2f}")
#return dists[1:n+1]<jupyter_output><empty_output><jupyter_text>We can play around with word vectors and test the distributional hypothesis.<jupyter_code>get_closest_words(glove['nurse'])
get_closest_words(glove['anxiety'])<jupyter_output>1) persistent 0.21
2) experiencing 0.21
3) discomfort 0.21
4) nervousness 0.22
5) headaches 0.22
<jupyter_text>So far so good. Now we can spice things up a bit by doing vector operations.<jupyter_code>get_closest_words(glove['king'] - glove['man'] + glove['woman'])
get_closest_words(glove['queen'] - glove['woman'] + glove['man'])<jupyter_output>1) king 0.14
2) prince 0.19
3) crown 0.22
4) coronation 0.25
5) royal 0.25
<jupyter_text>It's pretty wild that this just works out. Now, we can also show that the word vectors from the GloVe embeddings are somewhat biased. This is because the corpora they were trained on were biased due to social norms.<jupyter_code>get_closest_words(glove['doctor'] - glove['man'] + glove['woman'])
get_closest_words(glove['doctor'] - glove['woman'] + glove['man'])
get_closest_words(glove['programmer'] - glove['man'] + glove['woman'])
get_closest_words(glove['programmer'] - glove['woman'] + glove['man'])<jupyter_output>1) programmers 0.36
2) software 0.37
3) setup 0.38
4) backup 0.39
5) innovator 0.40
<jupyter_text>This seems quite weird. How is it that our word vectors allow "king" - "man" + "woman" = "queen"? This phenomenon is called **word analogies** and it occurs because of the distributional hypothesis. Take the relationship between countries and their capitals for example. It turns out that the mapping between countries and capitals roughly corresponds to a single direction in the high-dimensional embedding space that the word vectors live in.<jupyter_code>countries = ["china", "france"]
capitals = ["beijing", "paris"]
glove_tsne_from_word_list(countries + capitals)<jupyter_output><empty_output>
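As a rough sanity check of the "single direction" idea, one could compare the country-to-capital offset vectors directly (a sketch using the `Cosine_similarity` helper defined earlier; the exact value will depend on the embeddings):

```python
# Sketch: the country -> capital offsets should point in roughly the same direction.
offset_china = glove['beijing'] - glove['china']
offset_france = glove['paris'] - glove['france']
print(Cosine_similarity(offset_china, offset_france))  # expected to be fairly high
```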
<jupyter_start><jupyter_text>. | .
-- | --
 | 
ASTG Python Courses
---
An Introduction to netCDF4 Python
<jupyter_code>from __future__ import print_function<jupyter_output><empty_output><jupyter_text># Useful References * Create and read netCDF files
* netCDF4 module## Scientific Data
* N‐dimensional arrays and metadata:
* Measurements at specific time, location, condition
– Physics: temperature, pressure
– Chemistry: reaction speed
– Biology: type (species, cell types, nucleotides)
– Economics: price
– Algorithmics: program time and space
– Networking: network activity
– Robotics: movements
**Requirements**
+ Compact storage: compression
+ Fast I/O: parallel, partial, random access
+ Portability: transporting data between computers
+ Tools for manipulating data: reorganizing, aggregating, subsetting, converting, visualizing
+ Easy API in many languages: C, C++, Fortran, Java, Matlab, Perl, Python, R, ...## What We Will Cover
* Opening a file
* Dimension
* Variables
* Attributes
* Writing data
* Creating groups
* Reading data## What is netCDF?**Overview**
* The Network Common Data Form, or netCDF, is an interface to a library of data access functions for storing and retrieving data in the form of arrays.
* NetCDF is an abstraction that supports a view of data as a collection of self-describing, portable objects that can be accessed through a simple interface.
* All operations to access and manipulate data in a netCDF dataset must use only the set of functions provided by the interface.
* Array values may be accessed directly, without knowing details of how the data are stored.
* NetCDF supports efficient access to small subsets of large datasets.**Portability**
* The netCDF library is supported for various Linux/UNIX operating systems as well as MS Windows.
* APIs written for Fortran 77/90, C, C++, Java**Conventions**
* The mere use of netCDF is not sufficient to make data "self-describing" and meaningful to both humans and machines.
* By using a set of conventions, a data producer is more likely to produce files that can be easily shared within the research community, and that contain enough details to be useful as a long-term archive.
* The names of variables and dimensions should be meaningful and conform to any relevant conventions.
* It is important to use all the relevant standard attributes, following the relevant conventions.
## What is netCDF4 Python?
* Python interface to the netCDF version 4 library.
* **Can read and write files in both the new netCDF 4 and the netCDF 3 formats**.
* Can create files that are readable by HDF5 utilities.
* Relies on NumPy arrays.**Uncomment the cell below if you are in Google Colab**<jupyter_code>#!pip install netCDF4<jupyter_output><empty_output><jupyter_text>-----<jupyter_code>import numpy as np
from netCDF4 import Dataset<jupyter_output><empty_output><jupyter_text>#### Opening a netCDF File<jupyter_code>ncFileName = 'sample_netcdf.nc4'
modeType = 'w'
fileFormat = 'NETCDF4'
ncfid = Dataset(ncFileName, mode=modeType, format=fileFormat)<jupyter_output><empty_output><jupyter_text>`modeType` has the options:
* 'w': to create a new file
* 'r+': to read and write with an existing file
* 'r': to read (only) an existing file
* 'a': to append to existing file
`fileFormat` has the options:
* 'NETCDF3_CLASSIC': Original netCDF format
* 'NETCDF3_64BIT_OFFSET': Used to ease the size restrictions of netCDF classic files
* 'NETCDF4_CLASSIC'
* 'NETCDF4': Offer new features such as groups, compound types, variable length arrays, new unsigned integer types, parallel I/O access, etc.
* 'NETCDF3_64BIT_DATA'#### Creating Dimensions in a netCDF File
* Use the method `createDimension`
* It typically takes as arguments a string (name of the dimension) and an integer (dimension size)
* For unlimited dimensions, use `None` as size.
* Unlimited size dimensions must be declared before (“to the left of”) other dimensions.<jupyter_code>time = ncfid.createDimension('time', None)
lev = ncfid.createDimension('lev', 72)
lat = ncfid.createDimension('lat', 91)
lon = ncfid.createDimension('lon', 144)<jupyter_output><empty_output><jupyter_text>#### Create Variables
* Use the method ``createVariable``
* The first three arguments are the variable name, the variable type, and a tuple of previously defined dimension names.
* If you need data compression, add the `zlib=True` argument.**Dimension Variables**<jupyter_code>times = ncfid.createVariable('time','f8',('time',))
levels = ncfid.createVariable('lev','i4',('lev',))
latitudes = ncfid.createVariable('lat','f4',('lat',))
longitudes = ncfid.createVariable('lon','f4',('lon',))<jupyter_output><empty_output><jupyter_text>**Regular Variables**<jupyter_code>temp = ncfid.createVariable('temp','f4', \
('time','lev','lat','lon',))<jupyter_output><empty_output><jupyter_text>#### Adding Variable Attributes
* Attributes allow us to capture metadata that would otherwise be separated from the data.
* Variable attributes are added on individual variables in the dataset.
* All variables should have values for the following attributes unless there is a very good reason not to:
  * units
  * long_name
You may also consider attributes such as:
* valid_min: Smallest valid value of a variable.
* valid_max: Largest valid value of a variable.
* _FillValue: The value that a variable gets filled with before any data is loaded into it.
* standard_name: A name used to identify the physical quantity. A standard name contains no whitespace and is case sensitive. <jupyter_code>latitudes.long_name = 'latitude'
latitudes.units = 'degrees north'
longitudes.long_name = 'longitude'
longitudes.units = 'degrees east'
levels.long_name = 'vertical levels'
levels.units = 'hPa'
levels.positive = 'down'
times.long_name = 'time'
times.units = 'hours since 2019-12-19 11:59:43.0'
times.calendar = 'gregorian'
temp.long_name = 'temperature'
temp.units = 'K'
#temp._FillValue = 1.0e15
temp.valid_min = 200.
temp.valid_max = 350.
temp.standard_name = 'atmospheric_temperature'<jupyter_output><empty_output><jupyter_text>#### Adding Global Attributes
* Global attributes are on the dataset.
* We use the file identifier directly to add them.<jupyter_code>import time
ncfid.description = 'Sample netCDF file'
ncfid.institution = 'NASA GSFC'
ncfid.history = 'File created on ' + time.ctime(time.time())
ncfid.source = 'netCDF4 python tutorial'<jupyter_output><empty_output><jupyter_text>#### Writing Data in the File<jupyter_code>latitudes[:] = np.arange(-90,91,2.0)
longitudes[:] = np.arange(-180,180,2.5)
levels[:] = np.arange(0,72,1)
out_frequency = 3 # output frequency in hours
num_records = 5
for i in range(num_records):
times[i] = i*out_frequency
temp[i,:,:,:] = np.random.uniform(size=(levels.size,
latitudes.size,
longitudes.size))<jupyter_output><empty_output><jupyter_text>#### Printing Dimension Information<jupyter_code># List all the dimension information
for dim in ncfid.dimensions.values():
print(dim, dim.isunlimited())
# Get the list of dimension name and retrieve info for each dimension
for name in ncfid.dimensions.keys():
dim = ncfid.variables[name]
print(name, dim.dtype, dim.size)<jupyter_output><empty_output><jupyter_text>#### Printing File Attributes<jupyter_code># Get the global file attributes
for att in ncfid.ncattrs():
print(att+':', getattr(ncfid,att))
# Global attributes as a dictionary
print(ncfid.__dict__)<jupyter_output><empty_output><jupyter_text>#### Printing Variable Information<jupyter_code># List variable information but exclude dimensions
for name in ncfid.variables.keys():
if (name not in ncfid.dimensions.keys()):
data = ncfid.variables[name]
print(name, data.units, data.shape, data.dtype, data.dimensions)
def print_ncattr(ncfid, key):
"""
Prints the NetCDF file attributes for a given key
Parameters:
* ncfid: netCDF file identifier
* key: unicode (a valid netCDF4.Dataset.variables key)
"""
try:
print(key, '-->')
print("\t\ttype:", repr(ncfid.variables[key].dtype))
for ncattr in ncfid.variables[key].ncattrs():
print('\t\t%s:' % ncattr,\
repr(ncfid.variables[key].getncattr(ncattr)))
except KeyError:
print("\t\tWARNING: %s does not contain variable attributes" % key)
print(print_ncattr.__doc__)
for name in ncfid.variables.keys():
print_ncattr(ncfid, name)<jupyter_output><empty_output><jupyter_text>#### Create Groups
* We can organize data in hierarchical groups, which are analogous to directories in a filesystem.
* Groups serve as containers for variables, dimensions and attributes, as well as other groups.
* Use the method `createGroup` to create groups.<jupyter_code>fcstgrp = ncfid.createGroup('forecasts')
fcstgrpm = ncfid.createGroup('forecasts/model')
# List the groups
print(ncfid.groups)
def walk_group_tree(top):
"""
Python generator that is used to walk the directory tree.
"""
values = top.groups.values()
yield values
for value in top.groups.values():
for children in walk_group_tree(value):
yield children
# List of the created groups in the dataset
for children in walk_group_tree(ncfid):
for child in children:
print(child)<jupyter_output><empty_output><jupyter_text>##### Add variable to a group<jupyter_code>tempm = ncfid.createVariable('forecasts/model/temp','f4', \
('time','lev','lat','lon',))
tempm.long_name = 'temperature (model)'
tempm.units = 'K'
#tempm._FillValue = 1.0e15
tempm.valid_min = 200.
tempm.valid_max = 350.
tempm.standard_name = 'atmospheric_temperature'
tempm[0:num_records,:,:,:] = np.random.uniform(size=(num_records,
levels.size,
latitudes.size,
longitudes.size))
print(ncfid["forecasts/model"])
print(ncfid["forecasts/model/temp"])<jupyter_output><empty_output><jupyter_text>#### Close the file<jupyter_code>ncfid.close()<jupyter_output><empty_output><jupyter_text>### Reading a netCDF File<jupyter_code>with Dataset(ncFileName, mode='r') as ncfid:
time = ncfid.variables['time'][:]
lev = ncfid.variables['lev'][:]
lat = ncfid.variables['lat'][:]
lon = ncfid.variables['lon'][:]
temp = ncfid.variables['temp'][:]
grpid1 = ncfid.groups['forecasts']
grpid2 = grpid1.groups['model']
tempm = grpid2.variables['temp'][:]
# Print variable information
print_ncattr(ncfid, 'time')
print_ncattr(ncfid, 'temp')
print_ncattr(grpid2, 'temp')
print("Time: ", time)
print("Longitude values: ", lon)
print("Latitude values: ", lat)
print("Temperature info: ")
print(temp.shape)
print(np.min(temp), np.max(temp))
print("Temperature (model) info: ")
print(tempm.shape)
print(np.min(tempm), np.max(tempm))<jupyter_output><empty_output><jupyter_text>### Updating a Variable in an Existing netCDF File<jupyter_code>with Dataset(ncFileName, mode='a') as ncfid:
temp = ncfid.variables['temp'][:]
data = temp[:]
data = 1.1*data + 100.0
temp[:] = data
print(temp.shape)
print(np.min(temp), np.max(temp))<jupyter_output><empty_output><jupyter_text>## ExampleWe want to use the netCDF file:
https://www.unidata.ucar.edu/software/netcdf/examples/sresa1b_ncar_ccsm3-example.nc
to plot the surface air temperature (variable `tas`) and the zonal mean height of the wind (variable `ua`).
The metadata of the file is located at:
https://www.unidata.ucar.edu/software/netcdf/examples/sresa1b_ncar_ccsm3-example.cdl<jupyter_code># Get the remote file
nc_file = "sresa1b_ncar_ccsm3-example.nc"
url = "https://www.unidata.ucar.edu/software/netcdf/examples/"
import urllib.request
urllib.request.urlretrieve(url+nc_file, nc_file)
# Open the netCDF file and read surface air temperature
with Dataset(nc_file,'r') as ncid:
lons = ncid.variables['lon'][:] # longitude grid points
lats = ncid.variables['lat'][:] # latitude grid points
    levs = ncid.variables['plev'][:]   # pressure levels
surf_temp = ncid.variables['tas'][:]
uwind = ncid.variables['ua'][:]
print("Shape of lons: ", np.shape(lons), lons[0], lons[-1])
print("Shape of lats: ", np.shape(lats), lats[0], lats[-1])
print("Shape of levs: ", np.shape(levs), levs[0], levs[-1])
print("Shape of surf_temp: ", np.shape(surf_temp))
print("Shape of uwind: ", np.shape(uwind))<jupyter_output><empty_output><jupyter_text>**Load Plotting Modules**** Uncomment the cell below if you are on Google Colab**<jupyter_code>#!apt-get install libproj-dev proj-data proj-bin
#!apt-get install libgeos-dev
#!pip install cython
#!pip install cartopy<jupyter_output><empty_output><jupyter_text>---<jupyter_code>%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
from matplotlib import cm
import cartopy
import cartopy.crs as ccrs
import cartopy.feature as cfeature
from cartopy.mpl.ticker import LongitudeFormatter, LatitudeFormatter<jupyter_output><empty_output><jupyter_text>**Plot of Surface Temperature**<jupyter_code>plt.figure(figsize=(9, 5))
map_projection = ccrs.PlateCarree()
ax = plt.axes(projection=map_projection)
im = ax.contourf(lons, lats, surf_temp[0,:,:], transform=map_projection)
ax.coastlines()
ax.set_xticks(np.linspace(-180, 180, 5), crs=map_projection)
ax.set_yticks(np.linspace(-90, 90, 5), crs=map_projection)
lon_formatter = LongitudeFormatter(zero_direction_label=True)
lat_formatter = LatitudeFormatter()
ax.xaxis.set_major_formatter(lon_formatter)
ax.yaxis.set_major_formatter(lat_formatter)
plt.colorbar(im, orientation='vertical')
ax.set_global()
plt.show()<jupyter_output><empty_output><jupyter_text>**Plot Zonal Mean Height of Wind**<jupyter_code># Compute the zonal mean height
zonal_mean_height = np.mean(uwind[0,:,:,:], axis=2)
# Select contour levels
ncountours = 10
fac = 0.005
min_val = (1.0-fac)*np.min(zonal_mean_height)
max_val = (1.0+fac)*np.max(zonal_mean_height)
clevs = np.linspace(min_val, max_val, ncountours)
fig = plt.figure(figsize=(10, 6))
ax = fig.add_subplot(1, 1, 1)
# map contour values to colors
norm=matplotlib.colors.BoundaryNorm(clevs, ncolors=256, clip=False)
# draw the contours with contour labels
CS = ax.contour(lats, levs, zonal_mean_height, levels=clevs)
ax.clabel(CS,inline=1, fontsize=10, colors='black')
# draw the (filled) contours
contour = ax.contourf(lats, levs, zonal_mean_height, levels=clevs, norm=norm)
# Draw colorbar
fmt = matplotlib.ticker.FormatStrFormatter("%3.2g")
cbar = fig.colorbar(contour, ax=ax, orientation='horizontal', shrink=0.8,
ticks=clevs, format=fmt)
cbar.set_label('m s-1')
ax.set_yscale('log')
ax.set_xlabel("Latitude (degrees)")
ax.set_ylabel("Pressure (Pa)")
plt.show()<jupyter_output><empty_output>
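<jupyter_text>**Reading Only a Subset of a Variable**
As noted earlier, netCDF supports efficient access to small subsets of large datasets: slicing a variable reads only the requested hyperslab from disk instead of the whole array. A minimal added sketch, assuming the `sresa1b_ncar_ccsm3-example.nc` file downloaded above is still on disk:<jupyter_code>with Dataset(nc_file, 'r') as ncid:
    # Only this 10x10 corner of the first time step and first pressure level
    # is read from disk; the rest of 'ua' is never loaded into memory.
    ua_corner = ncid.variables['ua'][0, 0, :10, :10]
print(ua_corner.shape)<jupyter_output><empty_output>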
| no_license | /science_data_format/.ipynb_checkpoints/introduction_netcdf4-checkpoint.ipynb | abheeralisf/py_materials | 23 |
<jupyter_start><jupyter_text>### Model inspection:
**The model under analysis was trained using logistic regression and the union
of two classifiers, one using "word" as the _analyzer_ and the other using "char".
Let's look at the 10 features with the largest positive and largest negative weight for each class:**<jupyter_code>vect = pipeline.named_steps['feats']
clf = pipeline.named_steps['clf']
from sentiment.analysis import print_maxent_features
print_maxent_features(vect, clf, n=10, prettify=True)<jupyter_output>N:
! ! ! A guapa rt s! rac eli 1 ([-0.83341789 -0.71006416 -0.69528888 -0.39891781 -0.39116628 -0.35919981
-0.35230278 -0.34752441 -0.34490157 -0.34457249])
odi me no od me triste n no no no ([0.45335267 0.49109051 0.49409086 0.50665793 0.54603738 0.59735805
0.66252618 0.77205035 0.90183338 0.91858853])
NEU:
j ap ño am ? y L L uc ol ([-0.54014245 -0.49413524 -0.46230685 -0.44611203 -0.42453641 -0.42350058
-0.42126836 -0.41817165 -0.40529656 -0.40098482])
ser cereal cereal hacen tio Estoy nerviosa Lo hecho , dicho lo plan bonito serio viejas ([0.59900622 0.59900622 0.61462497 0.63610812 0.64944056 0.65468827
0.6589533 0.73702051 0.95766574 1.04788207])
NONE:
mu mu m me á no me muy muy uy ([-0.63417026 -0.62699317 -0.568688 -0.56681019 -0.54344921 -0.51474655
-0.50087856 -0.48762919 -0.48762919 -0.48095096])
ct 0 stra si Votado abstracto juntos ? ? ? ([0.5468255 0.55525434 0.57163574 0.58832631 0.78823053 0.91056084
0.92608845 0.98436306 1.0248[...]
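<jupyter_text>The ranking printed above can be reproduced directly from the classifier coefficients. The cell below is only an added sketch of that idea: it assumes `clf` is a scikit-learn logistic regression exposing one row of `coef_` per class and that the feature union in `vect` can report its feature names (older scikit-learn API); the real `print_maxent_features` helper may be implemented differently.<jupyter_code>import numpy as np

def top_features_sketch(vect, clf, n=10):
    # One feature name per column of the design matrix produced by the union.
    feature_names = np.asarray(vect.get_feature_names())
    for label, coefs in zip(clf.classes_, clf.coef_):
        order = np.argsort(coefs)  # ascending: most negative weights first
        print(label)
        print("  most negative:", list(feature_names[order[:n]]))
        print("  most positive:", list(feature_names[order[-n:]]))

top_features_sketch(vect, clf, n=10)<jupyter_output><empty_output>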
| no_license | /sentiment/model_inspection.ipynb | agusmdev/PLN-2019 | 1 |
<jupyter_start><jupyter_text># Estimator: reading data from files<jupyter_code>import sys
sys.path.append('model/samples/core/get_started')
import iris_data
import tensorflow as tf
tf.enable_eager_execution()
train_path, test_path = iris_data.maybe_download()
train_path
test_path
!head /Users/rosen/.keras/datasets/iris_test.csv
ds = tf.data.TextLineDataset(train_path).skip(1)
COLUMNS = ['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth', 'label']
def _parse_line(line):
FIELD_DEFAULTS = [[0.0], [0.0], [0.0], [0.0], [0]]
fields = tf.decode_csv(line, FIELD_DEFAULTS)
features = dict(zip(COLUMNS, fields))
label = features.pop('label')
return features, label
def csv_input_fn(train_path, batch_size):
    # Skip the CSV header line, parse each row, then shuffle/repeat/batch.
    ds = tf.data.TextLineDataset(train_path).skip(1)
    ds = ds.map(_parse_line).shuffle(1000).repeat().batch(batch_size)
    return ds
ds = ds.map(_parse_line).shuffle(1000).repeat().batch(100)
feature_columns = [
tf.feature_column.numeric_column(name)
for name in iris_data.CSV_COLUMN_NAMES[:-1]]
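# Added illustration: with eager execution enabled above, the parsed dataset can
# be inspected directly. This assumes `ds` is the shuffled/batched dataset built
# a few lines earlier from the training CSV.
peek_features, peek_labels = next(iter(ds))
print({name: tensor.shape for name, tensor in peek_features.items()})
print(peek_labels.shape)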
# Build the estimator
est = tf.estimator.LinearClassifier(feature_columns,
n_classes=3)
# Train the estimator
batch_size = 100
est.train(
steps=1000,
input_fn=lambda : iris_data.csv_input_fn(train_path, batch_size))<jupyter_output>INFO:tensorflow:Using default config.
WARNING:tensorflow:Using temporary folder as model directory: /var/folders/1r/rmk6zkmx6292q5t2s2pvk_h00000gn/T/tmp_8lbr0z8
INFO:tensorflow:Using config: {'_model_dir': '/var/folders/1r/rmk6zkmx6292q5t2s2pvk_h00000gn/T/tmp_8lbr0z8', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true
graph_options {
rewrite_options {
meta_optimizer_iterations: ONE
}
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x125af97f0>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_nu[...]
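<jupyter_text>As a follow-up, the trained estimator can be evaluated on the held-out CSV with the same input-function pattern. This added cell is only a sketch: it assumes `iris_data.csv_input_fn` handles `test_path` the same way it handles `train_path`, and `steps` is required because the input function repeats indefinitely.<jupyter_code>eval_result = est.evaluate(
    input_fn=lambda: iris_data.csv_input_fn(test_path, batch_size),
    steps=100)
print(eval_result)<jupyter_output><empty_output>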
| no_license | /TensorFlow/.ipynb_checkpoints/0031-从文件读数据到estimator-checkpoint.ipynb | RosenX/DataScienceNoteDiary | 1 |
<jupyter_start><jupyter_text># WGS PiPeline<jupyter_code>from __future__ import print_function
import os.path
import pandas as pd
import gzip
import sys
import numpy as np
sys.path.insert(0, '..')
from src.CCLE_postp_function import *
from JKBio import Datanalytics as da
from JKBio import TerraFunction as terra
from JKBio import Helper as h
from gsheets import Sheets
from taigapy import TaigaClient
import dalmatian as dm
from sklearn.manifold import TSNE
from sklearn.neighbors import KNeighborsClassifier
from bokeh.plotting import *
from bokeh.models import HoverTool
from collections import OrderedDict
from IPython.display import Image,display
%load_ext autoreload
%autoreload 2
%load_ext rpy2.ipython
tc = TaigaClient()
output_notebook()
sheets = Sheets.from_files('~/.client_secret.json', '~/.storage.json')
replace = {'T': 'Tumor', 'N': 'Normal', 'm': 'Unknown', 'L': 'Unknown'}<jupyter_output>The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
The rpy2.ipython extension is already loaded. To reload it, use:
%reload_ext rpy2.ipython
<jupyter_text>## boot up
We are instantiating all the parameters needed for this pipeline to run.<jupyter_code>samplesetname = "20Q3"
prevname="20Q2"
prevversion = 22
prevprevname ='20Q1'
prevprevversion= 20
virtual_public='public-20q3-3d35'
virtual_dmc='dmc-20q3-033d'
virtual_internal='internal-20q3-00d0'
workspace1="broad-genomics-delivery/Getz_IBM_CellLines_Exomes"
workspace2="broad-firecloud-ccle/CCLE_DepMap_WES"
workspace3="broad-genomics-delivery/CCLE_DepMap_WES"
workspace6="terra-broad-cancer-prod/CCLE_DepMap_WES"
refworkspace="broad-firecloud-ccle/"
source1="ibm"
source2="ccle"
source3="ccle"
source6="ccle"
refsheet_url = "https://docs.google.com/spreadsheets/d/1XkZypRuOEXzNLxVk9EOHeWRE98Z8_DBvL4PovyM01FE"
sheeturl = "https://docs.google.com/spreadsheets/d/115TUgA1t_mD32SnWAGpW9OKmJ2W5WYAOs3SuSdedpX4"
release = samplesetname
%%R
release <- '20Q3'
prevname <- '20Q2'
genome_version <- 'hg19'
taiga_version <- 10
prevversion <-13
wm1 = dm.WorkspaceManager(workspace1)
wm2 = dm.WorkspaceManager(workspace2)
wm3 = dm.WorkspaceManager(workspace3)
wm6 = dm.WorkspaceManager(workspace6)
refwm = dm.WorkspaceManager(refworkspace)
extract_to_change = {'from_arxspan_id': 'participant'}
ccle_refsamples = sheets.get(refsheet_url).sheets[0].to_frame()<jupyter_output><empty_output><jupyter_text>## Adding new data
We are looking for new samples in a range of workspaces.
These workspaces are quite messy and might contain duplicates or broken file paths...
- We are thus looking at the bam files one by one and comparing them with our own bams.
- We remove broken files, duplicates and add new version of a cell line's bam if we find some.<jupyter_code># we will be missing "primary disease","sm_id", "cellosaurus_id", "gender, "age", "primary_site", "primary_disease", "subtype", "subsubtype", "origin", "comments"
#when SMid: match==
samples, pairs, noarxspan = GetNewCellLinesFromWorkspaces(refworkspace, stype='wes', refurl=refsheet_url, wmfroms = [workspace1, workspace2, workspace3, workspace6], sources=[source1, source2, source3, source6], match=['ACH-','CDS-'], participantslicepos=10, accept_unknowntypes=True, extract=extract_to_change, recomputedate=True)
# I am trying to remove duplicates from samples without arxspan ids to then look more into them and see if I have to get data for them or if I should just throw them out
toremov=set()
for k, val in noarxspan.iterrows():
withsamesize = noarxspan[noarxspan["sample_id"] == val["sample_id"]]
if len(withsamesize) > 1:
for l, v in withsamesize.iloc[1:].iterrows():
toremov.add(l)
for i in toremov:
noarxspan = noarxspan.drop(i)
noarxspan
noarxspan.sample_id = [i.split('_Exom')[0] for i in noarxspan.sample_id]
len(noarxspan)
noarxspan['ccle_name'] = [''.join(i.split('_')[1:-1]).split('_v')[0] for i in noarxspan.sample_id]
noarxspan['readgroup'] = [i.split('_')[0] for i in noarxspan.sample_id]
for i, v in noarxspan.iterrows():
if not gcp.exists(v['cram_or_bam_path']):
print(v.ccle_name)
noarxspan = noarxspan.drop(i)
noarxspan['ccle_name'].tolist()
len(noarxspan)
toupdate = {"gender":[],
"primary_disease":[],
"sm_id":[],
"cellosaurus_id":[],
"age":[],
"primary_site":[],
"subtype":[],
"subsubtype":[],
"origin":[],
"comments":[],
"patient_id":[]}
samples
# If I have a previous samples I can update unknown data directly
index=[]
notfound=[]
for k, val in samples.iterrows():
dat = ccle_refsamples[ccle_refsamples['arxspan_id']==val['arxspan_id']]
if len(dat)>0:
index.append(k)
for k, v in toupdate.items():
toupdate[k].append(dat[k].tolist()[0])
else:
notfound.append(k)
len(index)
# doing so..
for k, v in toupdate.items():
samples.loc[index,k] =v
len(samples.loc[notfound].patient_id)
samples.loc[notfound].patient_id.tolist()
# for these samples I will need to check and manually add the data in the list
samples.loc[notfound]
# found same patient
a = ["ACH-000635","ACH-000717", "ACH-000864", "ACH-001042", "ACH-001547"]
b = ["ACH-002291","ACH-001672"]
ccle_refsamples[ccle_refsamples.arxspan_id.isin(a)].patient_id
# duplicate ach-id
dup = {"ACH-001620": "ACH-001605",
"ACH-001621": "ACH-001606"}
samples = changeCellLineNameInNewSet(new = samples, ref=ccle_refsamples, datatype="rna", dupdict=dup)
#rename ccle_name TODO: ask becky what to do
rename = {"PEDS117": "CCLFPEDS0009T"}
len(notfound)<jupyter_output><empty_output><jupyter_text>## getting the additional data and writing it here in the right order 'as shown above'
- use the stripped_cell_line_name to find the samples on https://docs.google.com/spreadsheets/d/1uqCOos-T9EMQU7y2ZUw4Nm84opU5fIT1y7jet1vnScE/edit#gid=356471436.
- Make sure that we don't have duplicate cell lines in there. Otherwise, use the duplicate renaming function
- copy Primary Site, Primary Disease, Subtype, Comments, and Disease Sub-subtype, if they exist (sometimes subtype and subsubtype are the same; don't use subsubtype then).
- look for the cell line in Cellosaurus; you might need to use one of the aliases given in master depmap pv..
- copy the cellosaurus_id, gender, and age info, or write 'U' if they don't exist (age can be a number or one of {Embryonic, Children, Adult, Fetus, U}).
- check whether it is flagged as a duplicate of another cell line.
- check that if it says this cell line is derived/children/father/samepatient from other cell lines, and that if we have any of the other cell lines, that the patient id is changed to be the same one for all (be sure that you are updating everywhere these patient ids are used)<jupyter_code>toupdate = {"gender":["Female","Female","Female"],
"primary_disease":["Breast Cancer","Breast Cancer","Breast Cancer"],
"cellosaurus_id":["CVCL_7932","CVCL_7931","CVCL_7933"],
"age":[37,37,37],
"primary_site":["pleural_effusion","pleural_effusion","breast"],
"subtype":["Carcinoma","Carcinoma","Carcinoma"],
"subsubtype":["","",""],
"comments":["HER2+; Received from Academic lab (Polyak, DFCI)","HER2+; Received from Academic lab (Polyak, DFCI)","HER2+; Received from Academic lab (Polyak, DFCI)"],
"stripped_cell_line_name":["21MT2","21MT1", "21NT"],
"patient_id":['PT-y3RbI7uD', 'PT-y3RbI7uD', 'PT-y3RbI7uD']}
a = pd.DataFrame(toupdate)
a['name'] = samples.loc[notfound,"stripped_cell_line_name"].tolist()
a
# updating..
for k, v in toupdate.items():
samples.loc[notfound,k] =v
# uploading to our bucket (now a new function)
samples = h.changeToBucket(samples,'gs://cclebams/wes/', values=['internal_bam_filepath','internal_bai_filepath'], catchdup=False)
samples
# saving and updating the spreadsheet with these
print("YOU NOW NEED TO UPDATE THE GOOGLE SHEET!")
samples.to_csv('temp/new_ccle_samples.csv')
samples['arxspan_id'].tolist()
samples
samples = samples.rename(columns={'patient_id':'participant_id'})
pairs
pairs = pairs.rename(columns={'patient_id':'participant_id'}).set_index('pair_id')
pairs.participant_id = samples.participant_id.tolist()
sam = cnwm.get_samples()
sam[sam['baits']=="AGILENT"]
#uploading new samples to mut
refwm = refwm.disable_hound()
refwm.upload_samples(samples)
refwm.upload_entities('pairs', pairs)
refwm.update_pair_set(pair_set_id=samplesetname,pair_ids=pairs.index)
sam = refwm.get_samples()
pair = refwm.get_pairs()
refwm.update_pair_set(pair_set_id='all',pair_ids=pair.index)
refwm.update_pair_set(pair_set_id='all_agilent',pair_ids=pair[pair["case_sample"].isin(sam[sam['baits']=="AGILENT"].index.tolist())].index)
refwm.update_pair_set(pair_set_id='all_ice',pair_ids=pair[pair["case_sample"].isin([i for i in sam[(sam['baits'] == "ICE") |(sam['baits'].isna())].index.tolist() if i != 'nan'])].index)
#creating a sample set
refwm.update_sample_set(sample_set_id=samplesetname, sample_ids=samples.index)
refwm.update_sample_set(sample_set_id='all', sample_ids=[i for i in sam.index.tolist() if i!='nan'])
refwm.update_sample_set(sample_set_id='all_agilent', sample_ids = sam[sam['baits'] == "AGILENT"].index.tolist())
refwm.update_sample_set(sample_set_id='all_ice', sample_ids=[i for i in sam[(sam['baits'] == "ICE") |(sam['baits'].isna())].index.tolist() if i != 'nan'])
#and CN
cnwm = dm.WorkspaceManager('broad-firecloud-ccle/DepMap_WES_CN_hg38')
cnwm = cnwm.disable_hound()
cnwm.upload_samples(samples)
cnwm.upload_entities('pairs', pairs)
cnwm.update_pair_set(pair_set_id=samplesetname,pair_ids=pairs.index)
sam = cnwm.get_samples()
pair = cnwm.get_pairs()
cnwm.update_pair_set(pair_set_id='all',pair_ids=pair.index)
cnwm.update_pair_set(pair_set_id='all_agilent',pair_ids=pair[pair["case_sample"].isin(sam[sam['baits']=="AGILENT"].index.tolist())].index)
cnwm.update_pair_set(pair_set_id='all_ice',pair_ids=pair[pair["case_sample"].isin([i for i in sam[(sam['baits'] == "ICE") |(sam['baits'].isna())].index.tolist() if i != 'nan'])].index)
#creating a sample set
#cnwm.update_sample_set(sample_set_id=samplesetname, sample_ids=samples.index)
#cnwm.update_sample_set(sample_set_id='all', sample_ids=[i for i in sam.index.tolist() if i!='nan'])
#cnwm.update_sample_set(sample_set_id='all_agilent', sample_ids = sam[sam['baits'] == "AGILENT"].index.tolist())
cnwm.update_sample_set(sample_set_id='all_ice', sample_ids=[i for i in sam[(sam['baits'] == "ICE") |(sam['baits'].isna())].index.tolist() if i != 'nan'])<jupyter_output><empty_output><jupyter_text>## Check that we have all the cell lines we expect for this release
This involves comparing to the list in the Google sheet "Cell Line Profiling Status."
_As the list cannot be parsed, we are not comparing it for now_<jupyter_code># this function may not work - it hasn't been tested
url = 'https://docs.google.com/spreadsheets/d/1qus-9TKzqzwUMNWp8S1QP4s4-3SsMo2vuQRZrNXf7ag/edit?ts=5db85e27#gid=0&fvid=1627883727'
compareToCuratedGS(url, sample = newsample[0], samplesetname = samplesetname, colname = 'CN New to internal')<jupyter_output><empty_output><jupyter_text># run the pipeline
We are using Dalmatian to send requests to Terra, where we run a set of 5 functions to generate the mutation dataset:
* For new samples in DepMap, run the ICE version of this task. CCLE2 samples used Agilent targets, so this pipeline should be used instead. The pipelines are identical in terms of their outputs, but the proper targets, baits, and pseudo normal should be used based on how the samples were sequenced.
**ICE_CGA_Production_Analysis_Pipeline_Cell_Lines_copy** (cclf/CGA_Production_Analysis_Pipeline_Cell_Lines_debuggingSnapshot ID: 22) OR
**AGILENT_CGA_Production_Analysis_Pipeline_Cell_Lines** (cclf/CGA_Production_Analysis_Pipeline_Cell_Lines_debuggingSnapshot ID: 22)
* **common_variant_filter** (breardon/common_variant_filterSnapshot ID: 3)
* **filterMAF_on_CGA_pipeline** (gkugener/filterMAF_on_CGA_pipelineSnapshot ID: 8)
* **aggregateMAFs_selectFields** (ccle_mg/aggregateMAFs_selectFieldsSnapshot ID: 1)
The outputs to be downloaded will be saved in the sample set that was run. The output we use for the release is:
* **passedCGA_filteredMAF_aggregated**
There are several other tasks in this workspace. In brief:
* **CGA_Production_Analysis_Pipeline_Cell_Lines** (lelagina/CGA_Production_Analysis_Pipeline_Cell_LinesSnapshot ID: 12). This task is the same as the ICE and AGILENT prefixed version above, except that it relied on pulling the baits and targets to use from the metadata stored for the samples. Having AGILENT and ICE versions specified made the uploading and running process easier.
* **SANGER_CGA_Production_Analysis_Pipeline_Cell_Lines** (cclf/CGA_Production_Analysis_Pipeline_Cell_Lines_debuggingSnapshot ID: 22). This task was trying to run the CGA pipeline on the Sanger WES data, using a Sanger pseudo normal. In its current implementation, this task fails to complete for the samples.
* **UNFILTERED_aggregateMAFs_selectFields** (ccle_mg/aggregateMAFs_selectFieldsSnapshot ID: 1). Aggregates the MAF outputted by the CGA cell line pipeline prior to the common variant filter and germline filtering tasks. This can give us insight to which mutations are getting filtered out when. We may want to potentially include this MAF in the release so people can see why certain mutations of interest may be getting filtered out.
* WES_DM_Mutation_Calling_Pipeline_(standard |expensive) (gkugener/WES_DM_Mutation_Calling_PipelineSnapshot ID: 2). This was a previous mutation calling pipeline implemented for CCLE. We do not use this pipeline any more as the CGA pipeline looks better.
* aggregate_filterMAF_CGA (CCLE/aggregate_filterMAF_CGASnapshot ID: 1). An aggregation MAF task that we used in the past. We do not use this task anymore.
* calculate_mutational_burden (breardon/calculate_mutational_burdenSnapshot ID: 21). This task can be used to calculate the mutational rate of the samples. We do not make use of this data in the release although it could be of interest.
* summarizeWigFile (breardon/summarizeWigFileSnapshot ID: 5). CCLF ran this task (might be necessary for the mutational burden task). For our workflow, we do not run it.## On Terra<jupyter_code>submission_id = refwm.create_submission("CGA_WES_CCLE_ICE", samplesetname,'sample_set',expression='this.samples')
terra.waitForSubmission(refworkspace, submission_id)<jupyter_output><empty_output><jupyter_text>### copy pairs data to sample data<jupyter_code>pairs = refwm.get_pairs()
pairs = pairs[pairs.index.isin(tokeep)]
pairs = pairs[~pairs['mutation_validator_validated_maf'].isna()]
pairs = pairs.drop(columns=['case_sample','control_sample','participant_id'])
pairs.index = [i.split('_')[0] for i in pairs.index]
refwm.update_sample_attributes(pairs)<jupyter_output><empty_output><jupyter_text>Continuing<jupyter_code>submission_id = refwm.create_submission("common_variant_filter", samplesetname, 'sample_set', expression='this.samples')
terra.waitForSubmission(refworkspace, submission_id)
submission_id = refwm.create_submission("filterMAF_on_CGA_pipeline", samplesetname,'sample_set',expression='this.samples')
terra.waitForSubmission(refworkspace, submission_id)<jupyter_output><empty_output><jupyter_text>### filtered<jupyter_code>submission_id1 = refwm.create_submission("aggregateMAFs_selectFields_copy", samplesetname)<jupyter_output><empty_output><jupyter_text>### unfiltered<jupyter_code>submission_id2 = refwm.create_submission("aggregateMAFs_selectFields_unfiltered", samplesetname)
terra.waitForSubmission(refworkspace, [submission_id1,submission_id2])<jupyter_output><empty_output><jupyter_text>### Germline### Save the workflow configurations used<jupyter_code>terra.saveConfigs(refworkspace,'./data/'+samplesetname+'/Mutconfig')<jupyter_output><empty_output><jupyter_text>## On local
### Remove some data files to save money<jupyter_code>res = refwm.get_samples()
toremove = ["fixedmate_bam"]
for val in toremove:
refwm.disable_hound().delete_entity_attributes('sample', res[val], delete_files=True)
! gsutil -m rm "gs://fc-secure-012d088c-f039-4d36-bde5-ee9b1b76b912/9e3cc501-3f08-47fb-87a5-0359febb833c/**/call-tumorMM_Task/*.cleaned.bam"
# sometimes it does not work so better check again
a = res.fixedmate_bam
a = [i for i in a if i is not np.nan]
gcp.rmFiles(a)<jupyter_output><empty_output><jupyter_text>### downloading from terra<jupyter_code>sam = refwm.get_samples()
nowes = set(mutations.DepMap_ID)-set(sam.arxspan_id)
nowes
mutations.columns
nothing = nowes - set(ccle_refsamples.arxspan_id)
nothing
set(mutations[mutations.DepMap_ID.isin(nothing) & ~mutations.SangerWES_AC.isna()].DepMap_ID)
res = refwm.get_sample_sets().loc["all"]
res
filtered = res['filtered_CGA_MAF_aggregated']
! gsutil cp $filtered "temp/mutation_filtered_terra_merged.txt"<jupyter_output><empty_output><jupyter_text>### get QC files<jupyter_code>dataMut = getWESQC(workspace=refworkspace ,only=[], qcname=["gatk_cnv_all_plots", "lego_plotter_pngs", "copy_number_qc_report", "ffpe_OBF_figures", "mut_legos_html", "oxoG_OBF_figures", "tumor_bam_base_distribution_by_cycle_metrics", "tumor_bam_converted_oxog_metrics"])
dataBam = getWESQC(workspace=refworkspace ,only=[], qcname=[ "tumor_bam_alignment_summary_metrics", "tumor_bam_bait_bias_summary_metrics", "tumor_bam_gc_bias_summary_metrics", "tumor_bam_hybrid_selection_metrics", "tumor_bam_insert_size_histogram", "tumor_bam_insert_size_metrics", "tumor_bam_pre_adapter_summary_metrics", "tumor_bam_quality_by_cycle_metrics", "tumor_bam_quality_distribution_metrics", "tumor_bam_quality_yield_metrics"])
new_refsamples = pd.read_csv('temp/newrefCN.csv',index_col="cds_sample_id")
for k,v in dataMut.items():
if k =='nan':
continue
new_refsamples.loc[k,'processing_qc'] = str(v) + ',' + new_refsamples.loc[k,'processing_qc']
for k,v in dataBam.items():
if k =='nan':
continue
new_refsamples.loc[k,'bam_qc'] = str(v) + ',' + new_refsamples.loc[k,'bam_qc']
new_refsamples.to_csv('temp/newrefWES.csv')<jupyter_output><empty_output><jupyter_text>### retrieving unfiltered mutations<jupyter_code>unfiltered = res['unfiltered_CGA_MAF_aggregated']
! gsutil cp $unfiltered "temp/mutation_unfiltered_terra_merged.txt"<jupyter_output><empty_output><jupyter_text>### retrieving RNAseq vcfs<jupyter_code>rnamutations = dm.WorkspaceManager(rnaworkspace).get_sample_sets().loc['All_samples']['merged_vcf']
mutations[mutations.DepMap_ID=="ACH-000045"]<jupyter_output><empty_output><jupyter_text>### retrieving germline mutations### postprocessing
Here, rather than rerunning the entire analysis, because we know we are adding only WES samples, we can download the previous release's MAF, add the samples, update any annotations, and perform any global filters at the end.
First we need to do an additional step of filtering on coverage and number
- readMutations
- createSNPs
- addToMainMutation
- filterAllelicFraction
- filterMinCoverage
- mergeAnnotations
- addAnnotation
- maf_add_variant_annotations
- mutation_maf_to_binary_matrix (x3)<jupyter_code>file = pd.read_csv('temp/mutation_filtered_terra_merged.txt',sep='\t')
print(file.columns[:10])
renaming = removeOlderVersions(names = set(file['Tumor_Sample_Barcode']), refsamples = refwm.get_samples(), arxspan_id = "arxspan_id", version="version")
print(file[file['Chromosome']=='0'])
file[file['Tumor_Sample_Barcode'].isin(renaming.keys())].replace({'Tumor_Sample_Barcode':renaming}).reset_index(drop=True).to_csv('temp/mutation_filtered_terra_merged.txt',sep='\t',index=None)
renaming<jupyter_output><empty_output><jupyter_text>#### saving samples used for 20Q2<jupyter_code>ccle_refsamples.loc[renaming.keys(),version]=1
new_refsamples.to_csv('temp/newrefWES.csv')
%%R
source('src/load_libraries_and_annotations.R')
load('src/DM_Omics/Annotations.rdata')
# There are some cell lines the celllinemapr does not know how to map so we need to load this data object for now (from old datasets)
source('src/CCLE_postp_function.R')
library(tidyverse)
library(data.table)
library(magrittr)
library(taigr)
library(cdsomics)
library(celllinemapr) # To pull out DepMap_IDs from CCLE_names where needed
%%R
newly_merged_maf <- readMutations('temp/mutation_filtered_terra_merged.txt')
new_release <- createSNPs(newly_merged_maf)
names(new_release)
%%R
previous.release.maf <- load.from.taiga(data.name='depmap-mutations-maf-35fe', data.file=paste0('mutations.',prevname),data.version=prevversion)
if (colnames(previous.release.maf)[1] == 'X1' || colnames(previous.release.maf)[1] == "") {
previous.release.maf[,1] <- NULL
}
prevnames <- names(previous.release.maf)
prevnames
%%R
print(nrow(previous.release.maf))
merged <- addToMainMutation(previous.release.maf, new_release)
print(nrow(merged))
%%R
## Adding more
newly_merged_maf <- readMutations('temp/mutation_filtered_terra_merged.txt')
new_release <- createSNPs(newly_merged_maf)
print(names(new_release))
merged <- addToMainMutation(merged, new_release)
nrow(merged)
%%R
setdiff(names(merged), names(previous.release.maf))
%%R
## check if some rows have nans
length(which(is.na(merged$Hugo_Symbol)))
%%R
dim(merged)
%%R
filtered <- filterAllelicFraction(merged)
%%R
head(new_release[which(new_release$Tumor_Sample_Barcode=='ACH-002466'),])
%%R
filtered <- filterMinCoverage(filtered$merged, filtered$removed_from_maf)
%%R
head(merged)
%%R
clean_annotations <- mergeAnnotations(merged,previous.release.maf)
%%R
# Guillaume's version
new_release <- addAnnotation(filtered$merged, clean_annotations, colnames(previous.release.maf))
# Allie's version
new_release <- maf_add_variant_annotations(new_release)
%%R
# some matric files that does get used internaly and might be useful
damaging_mutation <- mutation_maf_to_binary_matrix(new_release, damaging = TRUE)
other_mutation <- mutation_maf_to_binary_matrix(new_release, other = TRUE)
hotspot_mutation <- mutation_maf_to_binary_matrix(new_release, hotspot = TRUE)
%%R
# Save the ready to upload file to upload to taiga
write.table(
new_release,
paste0('temp/mutations.', release, '.all.csv'), sep = ',', quote = F, row.names = F)
# Save the ready to upload file to upload to taiga
write.table(
damaging_mutation,
paste0('temp/damaging_mutation.', release, '.all.csv'), sep = ',', quote = F)
# Save the ready to upload file to upload to taiga
write.table(
other_mutation,
paste0('temp/other_mutation.', release, '.all.csv'), sep = ',', quote = F)
# Save the ready to upload file to upload to taiga
write.table(
hotspot_mutation,
paste0('temp/hotspot_mutation.', release, '.all.csv'), sep = ',', quote = F)<jupyter_output><empty_output><jupyter_text># Validation## Compare to previous release
I would run some checks here comparing the results to the previous release's MAF. Namely:
- Count the total number of mutations per cell line, split by type (SNP, INS, DEL)
- Count the total number of mutations observed by position (group by chromosome, start position, end position and count the number of mutations)
- Look at specific differences between the two MAFs (join on DepMap_ID, Chromosome, Start position, End position, Variant_Type). I would do this for WES only<jupyter_code>mutations = pd.read_csv('temp/mutations.'+release+'.all.csv')
damaging_mutation = pd.read_csv('temp/damaging_mutation.'+release+'.all.csv')
print(len(damaging_mutation))
other_mutation = pd.read_csv('temp/other_mutation.'+release+'.all.csv')
print(len(other_mutation))
hotspot_mutation = pd.read_csv('temp/hotspot_mutation.'+release+'.all.csv')
print(len(hotspot_mutation))
mutations.columns
set(mutations.DepMap_ID) - set(mutations[~(mutations['CGA_WES_AC'].isna() & mutations['SangerWES_AC'].isna() & mutations['WGS_AC'].isna() & mutations['SangerRecalibWES_AC'].isna())].DepMap_ID)
len(a)
mutations[mutations.DepMap_ID=="ACH-000458"].sum(0)
mutations[mutations["Hugo_Symbol"]=="ACOT4"][mutations['Start_position']==74058831]
ac_data = mutations[[val for val in mutations.columns.values if '_AC' in val]]
ac_names = ac_data.columns.values
ac_data = ac_data.values
ac_data.shape[0]<jupyter_output><empty_output><jupyter_text>## Do some checks and manual rescuing<jupyter_code>mutations[mutations.DepMap_ID=="ACH-003000"]<jupyter_output><empty_output><jupyter_text>## check important mutations<jupyter_code># check MOLM13, MV411 cell lines- The well known mutation status of FLT3
# check TP53 mutation
toofew = 0
allnan = 0
for pos, val in enumerate(ac_data):
    i = 0
    print(str(100*pos/ac_data.shape[0]), end='\r')  # crude progress indicator
    for p, v in enumerate(val):
        if pd.isna(v):  # more robust than an identity check against np.nan
            i += 1
    if i == 7:  # all 7 allelic-count (*_AC) columns are missing for this row
        mutations = mutations.drop(pos)  # drop() is a method call, not a subscript
        allnan += 1
allnan<jupyter_output><empty_output><jupyter_text>### basic counts<jupyter_code>#Count the total number of mutations per cell line, split by type (SNP, INS, DEL)
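# Added sketch of the first suggested check above: count mutations per cell
# line, split by variant type. Assumes the merged MAF in `mutations` still
# carries the DepMap_ID and Variant_Type columns.
counts_by_type = (mutations
                  .groupby(['DepMap_ID', 'Variant_Type'])
                  .size()
                  .unstack(fill_value=0))
print(counts_by_type.head())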
# Count the total number of mutations observed by position<jupyter_output><empty_output><jupyter_text>Are mutations consistent?<jupyter_code># to check this, if you group all the mutations in the mutations table by the Chromosome, Start_position, End_position, Reference_Allele, Tumor_Seq_Allele1 columns, they should all have the same annotation for the other columns (protein change, exac_af, etc...)<jupyter_output><empty_output><jupyter_text>QC mutations: for a known dependency, check that it matches the mutation status of that gene (if P53 is mutated, there cannot be a dependency on P53 or MDM2/MDM4; the inverse holds for BRAF and KRAS with themselves)<jupyter_code>prevprevname,prevprevversion
mutations[mutations.DepMap_ID=="ACH-001546"][mutations.columns[-17:]]
prevprev= set(tc.get(name='depmap-mutation-calls-9be3', file= "depmap_"+prevprevname+"_mutation_calls", version = prevprevversion).DepMap_ID.tolist())<jupyter_output><empty_output><jupyter_text># uploading on taiga<jupyter_code>gsheets = sheets.get(sheeturl).sheets[6].to_frame()
wes_dmc_embargo = [i for i in gsheets['WES_DMC_embargo'].values.tolist() if str(i) != "nan"]
wes_embargo = [i for i in gsheets['WES_embargo'].values.tolist() if str(i) != "nan"]
blacklist = [i for i in gsheets['blacklist'].values.tolist() if str(i) != "nan"]
wes_embargo, wes_dmc_embargo, blacklist
! cd .. && git clone https://github.com/broadinstitute/depmap-release-readmes.git && cd -
! cd ../depmap-release-readmes && git pull
!cd ../depmap-release-readmes/ && python3 make_new_release.py $release && git add . && git commit -m $release && git push
os.system('cd ../depmap-release-readmes && git pull && mv release-'+release+'/internal-'+release+'.txt ../ccle_processing/temp/README && cd -')
tc.update_dataset(dataset_permaname="depmap-mutations-maf-35fe",
upload_file_path_dict={'temp/mutations.'+release+'.all.csv': 'TableCSV'},
dataset_description="""
# Mutations
filtered and unfiltered mutation files from Broad WES and Sanger WES data mapped to hg19
The MAF file for DepMap that includes all of the latest WES samples. This MAF is generated by merging CCLE (WGS, RNAseq, RD, HC) and Sanger (WES) data.
PORTAL TEAM SHOULD NOT USE THIS: There are lines here that should not make it even to internal. Must use subsetted dataset instead. These data will not make it on the portal starting 19Q1. With the DMC portal, there is new cell line release prioritization as to which lines can be included, so a new taiga dataset will be created containing CN for the portal.
version 1: In 19Q1 the WES_AC column has been replaced by two columns, VA_WES_AC and CGA_WES_AC. We are currently using the Van Allen and CGA based pipeline to generate mutation calls. The CGA pipeline includes more filtering on the MAFs than VA and has a better INDEL caller. However, some of these filters may be removing some variants of interest that are still capture by the VA pipeline, which is why both a retained for now. DEPRECATED: Missing the VA_WES_AC, CGA_WES_AC columns
version 2: 19Q1 data
version 3: 19Q2 data. We are no longer using the CCLE_WES_AC column. We are only using the CGA pipeline for mutation calls.
version 4: Updating to 19Q3interim DEPRECATED
version 5: Updating to 19Q3interim DEPRECATED
version 6: Updating to 19Q3interim
version 7: Updating to 19Q3 DEPRECATED
version 8: reparing the missing mutation problem DEPRECATED
version 9: reparing the missing column problem
version10:
Adding 52 new cell lines.
Some cells lines have been flagged as:
version11:
adding missing cell lines
Adding 52 new cell lines.
Some cells lines have been flagged as:
- having bad looking copy ration plots =
- Genes having a similar CN value accross all []
version 12:
adding 8 new cell lines
version 13:
removing a wrong column
version 14:
adding 8 new cell lines. Adding .all. since we are soon going to release a restricted set of mutations. this one contains everything which is not necessarily what we want
genes (gene rpkm):
__Rows__:
__Columns__:
Counts (gene counts):
__Rows__:
__Columns__:
Gene level CN data:
__Rows__:
__Columns__:
DepMap cell line IDs
gene names in the format HGNC\_symbol (Entrez\_ID)
DepMap\_ID, Chromosome, Start, End, Num\_Probes, Segment\_Mean
""")<jupyter_output><empty_output><jupyter_text>## Internal<jupyter_code>hotspot_mutation
prevmut = tc.get(name='depmap-mutation-calls-9be3', version=24, file='depmap_'+prevname+'_mutation_calls.all')
print('should be None')
print(set(prevmut.DepMap_ID) - set(mutations.DepMap_ID))
print("new lines")
newlines = set(mutations.DepMap_ID) - set(prevmut.DepMap_ID)
newlines
tc.update_dataset(dataset_permaname="depmap-mutation-calls-9be3",
upload_file_path_dict={'temp/depmap_'+release+'_mutation_calls.all': 'TableCSV',
'temp/damaging_mutation.all': 'NumericMatrixCSV',
'temp/other_mutation.all': 'NumericMatrixCSV',
'temp/hotspot_mutation.all': 'NumericMatrixCSV',
},#'temp/README': 'Raw'},
dataset_description="""
# Internal Mutations
Mutation calls for Internal DepMap data
* Version 1 Internal 18Q1*
original source: `/xchip/ccle_dist/broad_only/CMAG/mutations/CCLE_depMap_18Q1_maf_20180202.txt`
* Version 2-4 Internal 18Q2*
merged mutations and indels file (1,606 cell lines, including CCLE and Sanger WES reanalysis)
original source:
`/xchip/ccle_dist/broad_only/CMAG/mutations/CCLE_depMap_18q2_maf_20180502.txt`
Binary matrices:
- damaging: if isDeleterious is true
- missense: if isDeleterious is false
- hotspot: if missense and either TCGA or COSMIC hotspot
Version 2 contains the MAF file
* Version 5-6 Internal 18Q3*
version 5 deprecated
original source: `/xchip/ccle_dist/broad_only/CMAG/mutations/CCLE_depMap_18q3_maf_20180716.txt`
Binary matrices:
- damaging: if isDeleterious is true
- missense: if isDeleterious is false
- hotspot: if missense and either TCGA or COSMIC hotspot
- Rows: cell line, Broad (arxspan) IDs
Columns: Gene, HGNC symbol (Entrez ID)
MAF file
* Version 7-8 Internal 18Q4*
version 8 just changes a column name in the MAF file from Broad_ID to DepMap_ID
original source: `/xchip/ccle_dist/broad_only/CMAG/mutations/CCLE_DepMap_18Q4_maf_20181028.txt`
* Version 9-12 Internal 19Q1*
version 12 updates the column name from VA_WES_AC to CCLE_WES_AC
version 11+ uses an updated definition for hotspot mutations
version 12 contains the correct data for 19Q1
* Version 13 Internal 19Q2*
* Version 14-15 Internal 19Q3*
version 15 fixed entrez ids
* Version 16 Internal 19Q4*
adding 35 new cell lines.
* Version 16 Internal 19Q4*
uploading as matrices
* Version 17 Internal 19Q4*
removing unauthorized lines and setting as matrices
* Version 18 Internal 19Q4*
removing unauthorized lines and setting as matrices
* Version 19 Internal 20Q1*
uploading 8 new lines
* Version 20 Internal 20Q1*
removing unauthorized cl
* Version 21 Internal 20Q2*
uploading 8 new lines and adding .all to express the fact that this data is the aggregate of all different sequencing methods.
* Version 22 Internal 20Q2*
removing 2 cell lines
* Version 23 Internal 20Q3*
nothing different from 20Q2. no new cell lines
* Version 24 Internal 20Q2*
updating the blacklists
*** Variant annotation column ***
MAF file, added column (Variant_annotation) classifying each variant as either silent, damaging, other conserving, or other non-conserving, based on this mapping (old annotation from Variant_Classification column - new annotation):
Silent - silent
Splice_Site - damaging
Missense_Mutation - other non-conserving
Nonsense_Mutation - damaging
De_novo_Start_OutOfFrame - damaging
Nonstop_Mutation - other non-conserving
Frame_Shift_Del - damaging
Frame_Shift_Ins - damaging
In_Frame_Del - other non-conserving
In_Frame_Ins - other non-conserving
Stop_Codon_Del - other non-conserving
Stop_Codon_Ins - other non-conserving
Start_Codon_SNP - damaging
Start_Codon_Del - damaging
Start_Codon_Ins - damaging
5'Flank - other conserving
Intron - other conserving
IGR - other conserving
3'UTR - other conserving
5'UTR - other conserving
Binary matrices:
- damaging: if damaging
- other: if other conserving or other non-conserving
- hotspot: if it is not a silent mutation and is either TCGA or COSMIC hotspot
- Rows: cell line, DepMap (arxspan) IDs
Columns: Gene, HGNC symbol (Entrez ID)
NEW LINES:
"""+newlines)
# To add to a virtual dataset
AddToVirtual(virtual_internal, 'depmap-mutation-calls-9be3', [('CCLE_mutations', 'depmap_'+release+'_mutation_calls'),])#('README','README')])
# To add to an eternal dataset
AddToVirtual('depmap-a0ab', 'depmap-mutation-calls-9be3', [('CCLE_mutations', 'depmap_'+release+'_mutation_calls')])<jupyter_output><empty_output><jupyter_text>## DMC<jupyter_code>os.system('cd ../depmap-release-readmes && git pull && mv release-'+release+'/dmc-'+release+'.txt ../ccle_processing/temp/README && cd -')
print(len(mutations))
mutations = mutations[~mutations.DepMap_ID.isin(wes_embargo)]
print(len(mutations))
mutations.to_csv('temp/depmap_'+release+'_mutation_calls.all', index=False)
print(len(damaging_mutation))
damaging_mutation = damaging_mutation[~damaging_mutation.index.isin(wes_embargo)]
print(len(damaging_mutation))
damaging_mutation.to_csv('temp/damaging_mutation.all')
print(len(other_mutation))
other_mutation = other_mutation[~other_mutation.index.isin(wes_embargo)]
print(len(other_mutation))
other_mutation.to_csv('temp/other_mutation.all',)
print(len(hotspot_mutation))
hotspot_mutation = hotspot_mutation[~hotspot_mutation.index.isin(wes_embargo)]
print(len(hotspot_mutation))
hotspot_mutation.to_csv('temp/hotspot_mutation.all',)
prevmut = tc.get(name='depmap-mutation-calls-dfce', version=15, file='depmap_'+prevname+'_mutation_calls')
print('should be None')
print(set(prevmut.DepMap_ID) - set(mutations.DepMap_ID))
print("new lines")
newlines = set(mutations.DepMap_ID) - set(prevmut.DepMap_ID)
newlines
tc.update_dataset(dataset_permaname="depmap-mutation-calls-dfce",
upload_file_path_dict={'temp/depmap_'+release+'_mutation_calls.all': 'TableCSV',
'temp/damaging_mutation.all': 'NumericMatrixCSV',
'temp/other_mutation.all': 'NumericMatrixCSV',
'temp/hotspot_mutation.all': 'NumericMatrixCSV',
},#'temp/README': 'Raw'},
dataset_description="""
# DMC Mutations
* Version 1-5 DMC 19Q1*
version 5 is a one-off portal thing because dmc wanted to be able to plot if a gene has any mutation as one-hot encoded value in the x/y axes of the data explorer It adds the any_mutation matrix, but does not change the others. Code used to generate:
```
from taigapy import TaigaClient
c = TaigaClient()
dmc_19q1_mutation_taiga_root = "depmap-mutation-calls-dfce.3/"
other_matrix = c.get(dmc_19q1_mutation_taiga_root + "other_mutation")
damaging_matrix = c.get(dmc_19q1_mutation_taiga_root + "damaging_mutation")
hotspot_matrix = c.get(dmc_19q1_mutation_taiga_root + "hotspot_mutation")
df = other_matrix.append(damaging_matrix)
df = df.groupby(level=0).sum()
df = df.append(hotspot_matrix)
df = df.groupby(level=0).sum()
df[df > 1] = 1
df.to_csv('any_mutation.csv')
```
The code uses version 3 because the dmc portal was using version 3
version 4 updates the column name from VA_WES_AC to CCLE_WES_AC
version 3 has an updated definition for hotspot mutations
version 2+ contains the correct data for 19Q1
* Version 6 DMC 19Q2*
* Version 7-8 DMC 19Q3*
version 8 fixed entrez ids
* Version 9 DMC 19Q4*
adding 52 new cell lines.
* Version 10 DMC 19Q4*
removing unauthorized lines and setting as matrices
* Version 11 DMC 19Q4*
removing unauthorized lines and setting as matrices
* Version 12 Internal 20Q1*
uploading 8 new lines
* Version 13 Internal 20Q1*
removing unauthorized cl
* Version 14 Internal 20Q2*
uploading 8 new lines and adding .all to express the fact that this data is the aggregate of all different sequencing methods.
* Version 15 Internal 20Q2*
removing 2 lines
* Version 15 Internal 20Q3*
nothing different from 20Q2. no new cell lines
* Version 15 Internal 20Q3*
updating the blacklists
MAF file, added column (Variant_annotation) classifying each variant as either silent, damaging, other conserving, or other non-conserving, based on this mapping (old annotation from Variant_Classification column - new annotation):
Silent - silent
Splice_Site - damaging
Missense_Mutation - other non-conserving
Nonsense_Mutation - damaging
De_novo_Start_OutOfFrame - damaging
Nonstop_Mutation - other non-conserving
Frame_Shift_Del - damaging
Frame_Shift_Ins - damaging
In_Frame_Del - other non-conserving
In_Frame_Ins - other non-conserving
Stop_Codon_Del - other non-conserving
Stop_Codon_Ins - other non-conserving
Start_Codon_SNP - damaging
Start_Codon_Del - damaging
Start_Codon_Ins - damaging
5'Flank - other conserving
Intron - other conserving
IGR - other conserving
3'UTR - other conserving
5'UTR - other conserving
Binary matrices:
- damaging: if damaging
- other: if other conserving or other non-conserving
- hotspot: if it is not a silent mutation and is either TCGA or COSMIC hotspot
- Rows: cell line, DepMap (arxspan) IDs
Columns: Gene, HGNC symbol (Entrez ID)
NEW LINES:
"""+newlines)
# To add to a virtual dataset
AddToVirtual(virtual_dmc, 'depmap-mutation-calls-dfce', [('CCLE_mutations', 'depmap_'+release+'_mutation_calls'),])#('README','README')])<jupyter_output><empty_output><jupyter_text>## Public<jupyter_code>os.system('cd ../depmap-release-readmes && git pull && mv release-'+release+'/public-'+release+'.txt README && cd -')
#damaging_mutation
mutations=depmap_20Q3_mutation_calls
#hotspot_mutation
#other_mutation
print(len(mutations))
mutations = mutations[mutations.DepMap_ID.isin(prevprev)]
mutations = mutations[~mutations.DepMap_ID.isin(wes_dmc_embargo)]
print(len(mutations))
mutations.to_csv('temp/depmap_'+release+'_mutation_calls.all', index=False)
print(len(damaging_mutation))
damaging_mutation = damaging_mutation[damaging_mutation.index.isin(prevprev)]
damaging_mutation = damaging_mutation[~damaging_mutation.index.isin(wes_dmc_embargo)]
print(len(damaging_mutation))
damaging_mutation.to_csv('temp/damaging_mutation.all')
print(len(other_mutation))
other_mutation = other_mutation[other_mutation.index.isin(prevprev)]
other_mutation = other_mutation[~other_mutation.index.isin(wes_dmc_embargo)]
print(len(other_mutation))
other_mutation.to_csv('temp/other_mutation.all')
print(len(hotspot_mutation))
hotspot_mutation = hotspot_mutation[hotspot_mutation.index.isin(prevprev)]
hotspot_mutation = hotspot_mutation[~hotspot_mutation.index.isin(wes_dmc_embargo)]
print(len(hotspot_mutation))
hotspot_mutation.to_csv('temp/hotspot_mutation.all')
prevmut = tc.get(name='depmap-mutation-calls-9a1a', version=18, file='depmap_'+prevname+'_mutation_calls')
print('should be None')
ermgency_removed = set(prevmut.DepMap_ID) - set(mutations.DepMap_ID)
print(ermgency_removed)
print("new lines")
newlines = set(mutations.DepMap_ID) - set(prevmut.DepMap_ID)
newlines
description="""
# Public Mutations
Mutation calls for Public DepMap data
* Version 1 Public 18Q1*
original source: CCLE data portal
* Version 2 Public 18Q2*
merged mutations and indels file (1,549 cell lines total, including data for 63 newly released cell lines)
original source: `/xchip/ccle_dist/public/DepMap_18Q2/CCLE_DepMap_18Q2_maf_20180502.txt`
* Version 3-4 Public 18Q3*
version 3 deprecated
original source: `/xchip/ccle_dist/public/DepMap_18Q3/CCLE_DepMap_18q3_maf_20180718.txt`
Binary matrices:
damaging: if isDeleterious is true
missense: if isDeleterious is false
hotspot: if missense and either TCGA or COSMIC hotspot
Rows: cell line, Broad (arxspan) IDs
Columns: Gene, HGNC symbol (Entrez ID)
MAF file
* Version 5 Public 18Q4*
original source: `/xchip/ccle_dist/public/DepMap_18Q4/CCLE_DepMap_18q4_maf_20181029.txt`
* Version 6-9 Public 19Q1*
version 9 updates the column name from VA_WES_AC to CCLE_WES_AC
version 8 uses an updated definition for hotspot mutations
version 9 contains the correct data for 19Q1
* Version 10 Public 19Q2*
* Version 11-12 Public 19Q3*
version 12 fixed entrez ids
* Version 13 Public 19Q4*
adding 52 new cell lines
* Version 14 Public 19Q4*
removing unauthorized lines and setting matrices
* Version 15 Public 20Q1*
adding 8 new lines
* Version 16 Public 20Q1*
removing an unauthorized line
* Version 17 Internal 20Q2*
uploading 8 new lines and adding .all to express the fact that this data is the aggregate of all different sequencing methods.
* Version 18 Internal 20Q2*
removing 2 lines
* Version 19 Internal 20Q3*
nothing different from 20Q2. no new cell lines
* Version 20 Internal 20Q3*
updating the blacklists
* Version 21 Internal 20Q3*
updating the dmc
* Version 22 Internal 20Q3*
readding two already released samples to the public list
MAF file, added column (Variant_annotation) classifying each variant as either silent, damaging, other conserving, or other non-conserving, based on this mapping (old annotation from Variant_Classification column - new annotation):
Silent - silent
Splice_Site - damaging
Missense_Mutation - other non-conserving
Nonsense_Mutation - damaging
De_novo_Start_OutOfFrame - damaging
Nonstop_Mutation - other non-conserving
Frame_Shift_Del - damaging
Frame_Shift_Ins - damaging
In_Frame_Del - other non-conserving
In_Frame_Ins - other non-conserving
Stop_Codon_Del - other non-conserving
Stop_Codon_Ins - other non-conserving
Start_Codon_SNP - damaging
Start_Codon_Del - damaging
Start_Codon_Ins - damaging
5'Flank - other conserving
Intron - other conserving
IGR - other conserving
3'UTR - other conserving
5'UTR - other conserving
Binary matrices:
- damaging: if damaging
- other: if other conserving or other non-conserving
- hotspot: if it is not a silent mutation and is either TCGA or COSMIC hotspot
- Rows: cell line, DepMap (arxspan) IDs
Columns: Gene, HGNC symbol (Entrez ID)
NEW LINES:
"""+str(newlines)
if len(emergency_removed):
description+="""
!! WE REMOVED!!:
"""+str(ermgency_removed)
tc.update_dataset(dataset_permaname="depmap-mutation-calls-9a1a",
upload_file_path_dict={'temp/depmap_'+release+'_mutation_calls.all': 'TableCSV',
'temp/damaging_mutation.all': 'NumericMatrixCSV',
'temp/other_mutation.all': 'NumericMatrixCSV',
'temp/hotspot_mutation.all': 'NumericMatrixCSV',
},#'temp/README': 'Raw'},
dataset_description=description)
# To add to a virtual dataset
AddToVirtual(virtual_public, 'depmap-mutation-calls-9a1a', [('CCLE_mutations', 'depmap_'+release+'_mutation_calls'),])#('README','README')])<jupyter_output>[('CCLE_mutations', 'depmap-mutation-calls-9a1a.22/depmap_20Q3_mutation_calls'), ('CCLE_gene_cn', 'depmap-wes-cn-data-97cc.34/public_20Q3_gene_cn'), ('Achilles_gene_effect_unscaled', 'avana-public-tentative-20q3-3e73.5/gene_effect_unscaled'), ('Achilles_high_variance_genes', 'avana-public-tentative-20q3-3e73.5/high_variance_genes'), ('Achilles_guide_efficacy', 'avana-public-tentative-20q3-3e73.5/guide_efficacy'), ('CCLE_fusions_unfiltered', 'gene-fusions-6212.14/unfiltered_fusions_20Q3'), ('common_essentials', 'avana-public-tentative-20q3-3e73.5/essential_genes'), ('Achilles_logfold_change_failures', 'avana-public-tentative-20q3-3e73.5/logfold_change_failures'), ('CCLE_expression', 'depmap-rnaseq-expression-data-ccd0.25/public_20Q3_proteincoding_tpm'), ('Achilles_raw_readcounts', 'avana-public-tentative-20q3-3e73.5/raw_readcounts'), ('Achilles_raw_readcounts_failures', 'avana-public-tentative-20q3-3e73.5/raw_readcounts_failing'), ('README', 'public-20q3-3d35.22/README'), ('CCLE_segment[...]
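<jupyter_text>For reference, a minimal sketch (my own addition; the column names Hugo_Symbol, Variant_annotation, isTCGAhotspot and isCOSMIChotspot are assumptions about the MAF layout rather than something taken from this pipeline) of how binary matrices like the ones described above can be derived from a MAF table with pandas:<jupyter_code>import pandas as pd

def binary_matrix(maf, mask):
    # Cell-line x gene matrix with a 1 wherever the selected rows contain a mutation
    hits = maf.loc[mask, ['DepMap_ID', 'Hugo_Symbol']].drop_duplicates()
    return (pd.crosstab(hits.DepMap_ID, hits.Hugo_Symbol) > 0).astype(int)

damaging_sketch = binary_matrix(mutations, mutations.Variant_annotation == 'damaging')
other_sketch = binary_matrix(mutations, mutations.Variant_annotation.isin(['other conserving', 'other non-conserving']))
hotspot_sketch = binary_matrix(mutations, (mutations.Variant_annotation != 'silent')
                               & (mutations.isTCGAhotspot | mutations.isCOSMIChotspot))<jupyter_output><empty_output>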
| no_license | /WGS_CCLE.ipynb | FuChunjin/ccle_processing | 28 |
<jupyter_start><jupyter_text>###### Importing libraries<jupyter_code>import pandas as pd
import numpy as np
import time
# matplotlib
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from matplotlib.colors import ListedColormap
#sklearn
from sklearn import datasets, svm, metrics,tree
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
#preprocessing
from sklearn.preprocessing import StandardScaler,normalize
# Dimenionality Reduction
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn import random_projection
#Feature selection
from sklearn.feature_selection import VarianceThreshold
#Under sampling
#from imblearn.under_sampling import RandomUnderSampler
#Over sampling
#from imblearn.over_sampling import SMOTE, BorderlineSMOTE, SVMSMOTE, SMOTENC,RandomOverSampler
#Combined sampling
#from imblearn.combine import SMOTETomek
#Algorithms
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import VotingClassifier,AdaBoostClassifier,RandomForestClassifier,ExtraTreesClassifier,GradientBoostingClassifier,GradientBoostingRegressor
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import BernoulliNB,GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression,RidgeClassifier,Perceptron,PassiveAggressiveClassifier,RidgeClassifierCV
from sklearn.svm import SVC
from sklearn.decomposition import TruncatedSVD
from sklearn.utils import resample
from sklearn.pipeline import *
from sklearn import metrics
from sklearn.metrics import f1_score,confusion_matrix,classification_report,make_scorer,average_precision_score,precision_recall_curve
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import GridSearchCV
#import pandas_ml as pdml
import warnings
warnings.filterwarnings("ignore")
from scipy.stats import itemfreq
import importlib
from importlib import reload
from collections import defaultdict,Counter
from sklearn import preprocessing
%matplotlib inline
pd.options.display.max_columns=200
from sklearn.metrics import accuracy_score,f1_score
np.random.seed(42)
#Graphs
%matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
#import seaborn as sns;
import pickle<jupyter_output>Using matplotlib backend: MacOSX
<jupyter_text>###### Importing preprocessing file<jupyter_code>import Preprocessing as pp
#reload(pp)<jupyter_output>CRASH : (234866, 23)
VEHICLE : (474634, 17)
PEOPLE : (413775, 16)
['LANE_CNT', 'INTERSECTION_RELATED_I', 'NOT_RIGHT_OF_WAY_I', 'DOORING_I', 'WORK_ZONE_I', 'NUM_UNITS']
['UNIT_TYPE', 'VEHICLE_ID', 'CMRC_VEH_I', 'VEHICLE_YEAR', 'VEHICLE_DEFECT', 'VEHICLE_TYPE', 'VEHICLE_USE', 'MANEUVER', 'EXCEED_SPEED_LIMIT_I', 'CMV_ID', 'GVWR', 'TOTAL_VEHICLE_LENGTH', 'AXLE_CNT', 'VEHICLE_CONFIG']
['VEHICLE_ID', 'SEX', 'AGE', 'DRIVERS_LICENSE_CLASS', 'SAFETY_EQUIPMENT', 'DRIVER_ACTION', 'DRIVER_VISION', 'PHYSICAL_CONDITION', 'PEDPEDAL_ACTION', 'PEDPEDAL_VISIBILITY', 'PEDPEDAL_LOCATION', 'BAC_RESULT VALUE', 'CELL_PHONE_USE']
[]
[]
[]
Vpa rd_no : (234682, 1)
Vpa unit 1 : (234732, 31)
Vpa unit 2 : (234762, 61)
Vpa unit 3 : (234762, 91)
Vpa unit 4 : (234762, 121)
Vpa unit 5 : (234762, 151)
Vpa unit 6 : (234762, 181)
cvp with duplicates: (234641, 203)
cvp without duplicates: (234584, 203)
Dataframe size after adding dummies to crashes columns: (234584, 287)
Dataframe size after adding dummies to v[...]<jupyter_text>###### Splitting target variable from other variables<jupyter_code>df=pp.cvp_ohe
train_data=df.copy(deep=True)
train_data=train_data.drop(['PRIM_CONTRIBUTORY_CAUSE', 'SEC_CONTRIBUTORY_CAUSE'], axis=1)
train_labels=df[['PRIM_CONTRIBUTORY_CAUSE']].copy()
#ax = sns.heatmap(train_labels, vmin=0, vmax=1)<jupyter_output><empty_output><jupyter_text>###### Splitting to train and test
###### Scaling of data<jupyter_code>X_train, X_test, y_train, y_test = train_test_split(train_data, train_labels,test_size=0.20,stratify=train_labels,random_state=42)
scaler=StandardScaler()
X_train_scaled=scaler.fit_transform(X_train)
X_test_scaled=scaler.transform(X_test)<jupyter_output><empty_output><jupyter_text>### Algorithms Implementation# Gaussian Naive Bayes - Approach 1<jupyter_code>feature_selection=PCA(n_components=500)
clf=GaussianNB()
pipe_clf = make_pipeline(feature_selection,clf)
pipe_clf.fit(X_train_scaled, y_train)
y_pred_GNB1 = pipe_clf.predict(X_test_scaled)<jupyter_output><empty_output><jupyter_text>###### Performance Evaluation for Gaussian Naive Bayes - Approach 1<jupyter_code>print('********Gaussian Naive Bayes, Standard Scaled, PCA=500','********')
f1_score_micro_1_GNB1 = metrics.f1_score(y_test, y_pred_GNB1, average='micro')
f1_score_wt_1_GNB1 = metrics.f1_score(y_test,y_pred_GNB1,average='weighted')
precision_score_wt_1_GNB1 =metrics.precision_score(y_test, y_pred_GNB1, average='weighted')
recall_score_wt_1_GNB1 =metrics.recall_score(y_test, y_pred_GNB1, average='weighted')
print('F1-score_micro = ',f1_score_micro_1_GNB1)
print('F1-score = ',f1_score_wt_1_GNB1)
print('Precision = ',precision_score_wt_1_GNB1)
print('Recall Score = ',recall_score_wt_1_GNB1)
<jupyter_output>********Gaussian Naive Bayes, Standard Scaled, PCA=500 ********
F1-score_micro = 0.07123217597033059
F1-score = 0.10507073574861406
Precision = 0.3217250962687425
Recall Score = 0.07123217597033059
Stored 'f1_score_micro_1_GNB1' (float64)
Stored 'f1_score_wt_1_GNB1' (float64)
Stored 'precision_score_wt_1_GNB1' (float64)
Stored 'recall_score_wt_1_GNB1' (float64)
<jupyter_text># Gaussian Naive Bayes - Approach 2<jupyter_code>feature_selection=VarianceThreshold()
clf=GaussianNB()
pipe_clf = make_pipeline(feature_selection,clf)
pipe_clf.fit(X_train_scaled, y_train)
y_pred_GNB2 = pipe_clf.predict(X_test_scaled)<jupyter_output><empty_output><jupyter_text>###### Performance Evaluation for Gaussian Naive Bayes - Approach 2<jupyter_code>print('********Gaussian Naive Bayes, Standard Scaled, Variance threshold','********')
f1_score_micro_2_GNB2 = metrics.f1_score(y_test, y_pred_GNB2, average='micro')
f1_score_wt_2_GNB2 = metrics.f1_score(y_test,y_pred_GNB2,average='weighted')
precision_score_wt_2_GNB2 = metrics.precision_score(y_test, y_pred_GNB2, average='weighted')
recall_score_wt_2_GNB2 =metrics.recall_score(y_test, y_pred_GNB2, average='weighted')
print('F1-score_micro = ',f1_score_micro_2_GNB2)
print('F1-score = ',f1_score_wt_2_GNB2)
print('Precision = ',precision_score_wt_2_GNB2)
print('Recall Score = ',recall_score_wt_2_GNB2)
<jupyter_output>********Gaussian Naive Bayes, Standard Scaled, Variance threshold ********
F1-score_micro = 0.005456444359187501
F1-score = 0.0038950982391299923
Precision = 0.2084925981060008
Recall Score = 0.005456444359187501
Stored 'f1_score_micro_2_GNB2' (float64)
Stored 'f1_score_wt_2_GNB2' (float64)
Stored 'precision_score_wt_2_GNB2' (float64)
Stored 'recall_score_wt_2_GNB2' (float64)
<jupyter_text># Gaussian Naive Bayes - Approach 3<jupyter_code>feature_selection=TruncatedSVD(n_components=500)
clf=GaussianNB()
pipe_clf = make_pipeline(feature_selection,clf)
pipe_clf.fit(X_train_scaled, y_train)
y_pred_GNB3 = pipe_clf.predict(X_test_scaled)
<jupyter_output><empty_output><jupyter_text>#### Performance Evaluation for Gaussian Naive Bayes - Approach 3<jupyter_code>print('********Gaussian Naive Bayes, Standard Scaled, Truncated SVD','********')
f1_score_micro_2_GNB3 = metrics.f1_score(y_test, y_pred_GNB3, average='micro')
f1_score_wt_2_GNB3 = metrics.f1_score(y_test,y_pred_GNB3,average='weighted')
precision_score_wt_2_GNB3 = metrics.precision_score(y_test, y_pred_GNB3, average='weighted')
recall_score_wt_2_GNB3 = metrics.recall_score(y_test, y_pred_GNB3, average='weighted')
print('F1-score_micro = ',f1_score_micro_2_GNB3)
print('F1-score = ',f1_score_wt_2_GNB3)
print('Precision = ',precision_score_wt_2_GNB3)
print('Recall Score = ',recall_score_wt_2_GNB3)
<jupyter_output>********Gaussian Naive Bayes, Standard Scaled, Truncated SVD ********
F1-score_micro = 0.06944178016497218
F1-score = 0.10224143429953816
Precision = 0.3192906519134552
Recall Score = 0.06944178016497218
Stored 'f1_score_micro_2_GNB3' (float64)
Stored 'f1_score_wt_2_GNB3' (float64)
Stored 'precision_score_wt_2_GNB3' (float64)
Stored 'recall_score_wt_2_GNB3' (float64)
<jupyter_text># Algorithm - II # Logistic Regression - Approach 1<jupyter_code>feature_selection=PCA(n_components=500)
clf=LogisticRegression(random_state=0, solver='lbfgs', multi_class='multinomial')
pipe_clf = make_pipeline(feature_selection,clf)
pipe_clf.fit(X_train_scaled, y_train)
y_pred_LR1 = pipe_clf.predict(X_test_scaled)<jupyter_output><empty_output><jupyter_text>#### Performance Evaluation for Logistic Regression - Approach 1<jupyter_code>print('********Logistic Regression, Standard Scaled, PCA=500','********')
f1_score_micro_1_LR1 = metrics.f1_score(y_test, y_pred_LR1, average='micro')
f1_score_wt_1_LR1 = metrics.f1_score(y_test,y_pred_LR1,average='weighted')
precision_score_wt_1_LR1 = metrics.precision_score(y_test, y_pred_LR1, average='weighted')
recall_score_wt_1_LR1 = metrics.recall_score(y_test, y_pred_LR1, average='weighted')
print('F1-score_micro = ',f1_score_micro_1_LR1)
print('F1-score = ',f1_score_wt_1_LR1)
print('Precision = ',precision_score_wt_1_LR1)
print('Recall Score = ',recall_score_wt_1_LR1)
<jupyter_output>********Logistic Regression, Standard Scaled, PCA=500 ********
F1-score_micro = 0.6232495683867255
F1-score = 0.580873794994892
Precision = 0.5805246278101315
Recall Score = 0.6232495683867255
Stored 'f1_score_micro_1_LR1' (float64)
Stored 'f1_score_wt_1_LR1' (float64)
Stored 'precision_score_wt_1_LR1' (float64)
Stored 'recall_score_wt_1_LR1' (float64)
<jupyter_text># Logistic Regression - Approach 2<jupyter_code>feature_selection=VarianceThreshold()
clf=LogisticRegression(random_state=0, solver='lbfgs', multi_class='multinomial')
pipe_clf = make_pipeline(feature_selection,clf)
pipe_clf.fit(X_train_scaled, y_train)
y_pred_LR2 = pipe_clf.predict(X_test_scaled)<jupyter_output><empty_output><jupyter_text>#### Performance Evaluation for Logistic Regression - Approach 2<jupyter_code>print('********Logistic Regression, Standard Scaled, Variance threshold','********')
f1_score_micro_2_LR2 = metrics.f1_score(y_test, y_pred_LR2, average='micro')
f1_score_wt_2_LR2 = metrics.f1_score(y_test,y_pred_LR2,average='weighted')
precision_score_wt_2_LR2 = metrics.precision_score(y_test, y_pred_LR2, average='weighted')
recall_score_wt_2_LR2 = metrics.recall_score(y_test, y_pred_LR2, average='weighted')
print('F1-score_micro = ',f1_score_micro_2_LR2)
print('F1-score = ',f1_score_wt_2_LR2)
print('Precision = ',precision_score_wt_2_LR2)
print('Recall Score = ',recall_score_wt_2_LR2)
<jupyter_output>********Logistic Regression, Standard Scaled, Variance threshold ********
F1-score_micro = 0.6293454398192553
F1-score = 0.5898537709788344
Precision = 0.5899418346929847
Recall Score = 0.6293454398192553
Stored 'f1_score_micro_2_LR2' (float64)
Stored 'f1_score_wt_2_LR2' (float64)
Stored 'precision_score_wt_2_LR2' (float64)
Stored 'recall_score_wt_2_LR2' (float64)
<jupyter_text># Logistic Regression - Approach 3<jupyter_code>feature_selection=TruncatedSVD(n_components=500)
clf=LogisticRegression(random_state=0, solver='lbfgs', multi_class='multinomial')
pipe_clf = make_pipeline(feature_selection,clf)
pipe_clf.fit(X_train_scaled, y_train)
y_pred_LR3 = pipe_clf.predict(X_test_scaled)<jupyter_output><empty_output><jupyter_text>#### Performance Evaluation for Logistic Regression - Approach 3<jupyter_code>print('********Logistic Regression, Standard Scaled, Truncated SVD','********')
f1_score_tsvd_micro_LR3 = metrics.f1_score(y_test, y_pred_LR3, average='micro')
f1_score_tsvd_wt_LR3 = metrics.f1_score(y_test,y_pred_LR3,average='weighted')
precision_score_tsvd_wt_LR3 = metrics.precision_score(y_test, y_pred_LR3, average='weighted')
recall_score_tsvd_wt_LR3 = metrics.recall_score(y_test, y_pred_LR3, average='weighted')
print('F1-score_micro = ',f1_score_tsvd_micro_LR3)
print('F1-score = ',f1_score_tsvd_wt_LR3)
print('Precision = ',precision_score_tsvd_wt_LR3)
print('Recall Score = ',recall_score_tsvd_wt_LR3)
<jupyter_output>********Logistic Regression, Standard Scaled, Truncated SVD ********
F1-score_micro = 0.6235905961591747
F1-score = 0.5817147236555426
Precision = 0.5815207708644762
Recall Score = 0.6235905961591747
Stored 'f1_score_tsvd_micro_LR3' (float64)
Stored 'f1_score_tsvd_wt_LR3' (float64)
Stored 'precision_score_tsvd_wt_LR3' (float64)
Stored 'recall_score_tsvd_wt_LR3' (float64)
<jupyter_text># Algorithm III# Random Forest - Approach 1<jupyter_code>feature_selection=PCA(n_components=500)
clf=RandomForestClassifier(n_estimators=100, random_state=100)
pipe_clf = make_pipeline(feature_selection,clf)
pipe_clf.fit(X_train_scaled, y_train)
y_pred_RF1 = pipe_clf.predict(X_test_scaled)<jupyter_output><empty_output><jupyter_text>#### Performance Evaluation for Random Forest - Approach 1<jupyter_code>print('********Random Forest, Standard Scaled, PCA=500','********')
f1_score_micro_1_RF1 = metrics.f1_score(y_test, y_pred_RF1, average='micro')
f1_score_wt_1_RF1 = metrics.f1_score(y_test,y_pred_RF1,average='weighted')
precision_score_wt_1_RF1 = metrics.precision_score(y_test, y_pred_RF1, average='weighted')
recall_score_wt_1_RF1 = metrics.recall_score(y_test, y_pred_RF1, average='weighted')
print('F1-score_micro = ',f1_score_micro_1_RF1)
print('F1-score = ',f1_score_wt_1_RF1)
print('Precision = ',precision_score_wt_1_RF1)
print('Recall Score = ',recall_score_wt_1_RF1)
<jupyter_output>********Random Forest, Standard Scaled, PCA=500 ********
F1-score_micro = 0.5659569026152568
F1-score = 0.5084327706642776
Precision = 0.5500017743952035
Recall Score = 0.5659569026152568
Stored 'f1_score_micro_1_RF1' (float64)
Stored 'f1_score_wt_1_RF1' (float64)
Stored 'precision_score_wt_1_RF1' (float64)
Stored 'recall_score_wt_1_RF1' (float64)
<jupyter_text># Random Forest - Approach 2<jupyter_code>feature_selection=VarianceThreshold()
clf=RandomForestClassifier(n_estimators=100, random_state=100)
pipe_clf = make_pipeline(feature_selection,clf)
pipe_clf.fit(X_train_scaled, y_train)
y_pred_RF2 = pipe_clf.predict(X_test_scaled)<jupyter_output><empty_output><jupyter_text>#### Performance Evaluation for Random Forest - Approach 2<jupyter_code>print('********Random Forest, Standard Scaled, Variance threshold','********')
f1_score_micro_2_RF2 = metrics.f1_score(y_test, y_pred_RF2, average='micro')
f1_score_wt_2_RF2 = metrics.f1_score(y_test,y_pred_RF2,average='weighted')
precision_score_wt_2_RF2 = metrics.precision_score(y_test, y_pred_RF2, average='weighted')
recall_score_wt_2_RF2 = metrics.recall_score(y_test, y_pred_RF2, average='weighted')
print('F1-score_micro = ',f1_score_micro_2_RF2)
print('F1-score = ',f1_score_wt_2_RF2)
print('Precision = ',precision_score_wt_2_RF2)
print('Recall Score = ',recall_score_wt_2_RF2)
<jupyter_output>********Random Forest, Standard Scaled, Variance threshold ********
F1-score_micro = 0.6396828441716222
F1-score = 0.591356582090985
Precision = 0.6195473528899941
Recall Score = 0.6396828441716222
Stored 'f1_score_micro_2_RF2' (float64)
Stored 'f1_score_wt_2_RF2' (float64)
Stored 'precision_score_wt_2_RF2' (float64)
Stored 'recall_score_wt_2_RF2' (float64)
<jupyter_text># Random Forest - Approach 3<jupyter_code>feature_selection=TruncatedSVD(n_components=500)
clf=RandomForestClassifier(n_estimators=100, random_state=100)
pipe_clf = make_pipeline(feature_selection,clf)
pipe_clf.fit(X_train_scaled, y_train)
y_pred_RF3 = pipe_clf.predict(X_test_scaled)<jupyter_output><empty_output><jupyter_text>#### Performance Evaluation for Random Forest - Approach 3<jupyter_code>print('********Random Forest, Standard Scaled, Truncated SVD','********')
f1_score_micro_2_RF3 = metrics.f1_score(y_test, y_pred_RF3, average='micro')
f1_score_wt_2_RF3 = metrics.f1_score(y_test,y_pred_RF3,average='weighted')
precision_score_wt_2_RF3 = metrics.precision_score(y_test, y_pred_RF3, average='weighted')
recall_score_wt_2_RF3 = metrics.recall_score(y_test, y_pred_RF3, average='weighted')
print('F1-score_micro = ',f1_score_micro_2_RF3)
print('F1-score = ',f1_score_wt_2_RF3)
print('Precision = ',precision_score_wt_2_RF3)
print('Recall Score = ',recall_score_wt_2_RF3)
<jupyter_output>********Random Forest, Standard Scaled, Truncated SVD ********
F1-score_micro = 0.5655945606070294
F1-score = 0.5073501551584017
Precision = 0.5453460888515417
Recall Score = 0.5655945606070294
Stored 'f1_score_micro_2_RF3' (float64)
Stored 'f1_score_wt_2_RF3' (float64)
Stored 'precision_score_wt_2_RF3' (float64)
Stored 'recall_score_wt_2_RF3' (float64)
<jupyter_text># Random Forest - Approach 4<jupyter_code>feature_selection=PCA(n_components=500)
clf=RandomForestClassifier(n_estimators=1000, max_depth=5, random_state=100, max_features=10)
pipe_clf = make_pipeline(feature_selection,clf)
pipe_clf.fit(X_train_scaled, y_train)
y_pred_RF4 = pipe_clf.predict(X_test_scaled)<jupyter_output><empty_output><jupyter_text>#### Performance Evaluation for Random Forest - Approach 4<jupyter_code>print('********Random Forest, Standard Scaled, PCA=500','********')
f1_score_micro_2_RF4 = metrics.f1_score(y_test, y_pred_RF4, average='micro')
f1_score_wt_2_RF4 = metrics.f1_score(y_test,y_pred_RF4,average='weighted')
precision_score_wt_2_RF4 = metrics.precision_score(y_test, y_pred_RF4, average='weighted')
recall_score_wt_2_RF4 = metrics.recall_score(y_test, y_pred_RF4, average='weighted')
print('F1-score_micro = ',f1_score_micro_2_RF4)
print('F1-score = ',f1_score_wt_2_RF4)
print('Precision = ',precision_score_wt_2_RF4)
print('Recall Score = ',recall_score_wt_2_RF4)
import pickle
algorithm_1 = {'f1_score_micro_1_GNB1':f1_score_micro_1_GNB1,
'f1_score_wt_1_GNB1': f1_score_wt_1_GNB1,
'precision_score_wt_1_GNB1': precision_score_wt_1_GNB1,
'recall_score_wt_1_GNB1':recall_score_wt_1_GNB1,
'f1_score_micro_2_GNB2':f1_score_micro_2_GNB2,
'f1_score_wt_2_GNB2':f1_score_wt_2_GNB2,
'precision_score_wt_2_GNB2':precision_score_wt_2_GNB2,
'recall_score_wt_2_GNB2':recall_score_wt_2_GNB2,
'f1_score_micro_2_GNB3':f1_score_micro_2_GNB3,
'f1_score_wt_2_GNB3':f1_score_wt_2_GNB3,
'precision_score_wt_2_GNB3':precision_score_wt_2_GNB3,
'recall_score_wt_2_GNB3':recall_score_wt_2_GNB3,
'f1_score_micro_1_LR1':f1_score_micro_1_LR1,
'f1_score_wt_1_LR1':f1_score_wt_1_LR1,
'precision_score_wt_1_LR1':precision_score_wt_1_LR1,
'recall_score_wt_1_LR1':recall_score_wt_1_LR1,
'f1_score_micro_2_LR2':f1_score_micro_2_LR2,
'f1_score_wt_2_LR2':f1_score_wt_2_LR2,
'precision_score_wt_2_LR2':precision_score_wt_2_LR2,
'recall_score_wt_2_LR2':recall_score_wt_2_LR2,
'f1_score_tsvd_micro_LR3':f1_score_tsvd_micro_LR3,
'f1_score_tsvd_wt_LR3':f1_score_tsvd_wt_LR3,
'precision_score_tsvd_wt_LR3':precision_score_tsvd_wt_LR3,
'recall_score_tsvd_wt_LR3':recall_score_tsvd_wt_LR3,
'f1_score_micro_1_RF1':f1_score_micro_1_RF1,
'f1_score_wt_1_RF1':f1_score_wt_1_RF1,
'precision_score_wt_1_RF1':precision_score_wt_1_RF1,
'recall_score_wt_1_RF1':recall_score_wt_1_RF1,
'f1_score_micro_2_RF2':f1_score_micro_2_RF2,
'f1_score_wt_2_RF2':f1_score_wt_2_RF2,
'precision_score_wt_2_RF2':precision_score_wt_2_RF2,
'recall_score_wt_2_RF2':recall_score_wt_2_RF2,
'f1_score_micro_2_RF3':f1_score_micro_2_RF3,
'f1_score_wt_2_RF3':f1_score_wt_2_RF3,
'precision_score_wt_2_RF3':precision_score_wt_2_RF3,
'recall_score_wt_2_RF3':recall_score_wt_2_RF3,
'f1_score_micro_2_RF4':f1_score_micro_2_RF4,
'f1_score_wt_2_RF4':f1_score_wt_2_RF4,
'precision_score_wt_2_RF4':precision_score_wt_2_RF4,
'recall_score_wt_2_RF4':recall_score_wt_2_RF4
}
pickle.dump(algorithm_1,open("algorithm_1.p","wb"))<jupyter_output><empty_output>
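<jupyter_text>The evaluation cells above repeat the same four metric calls for every model. A small helper (my own sketch, not part of the original pipeline) keeps the comparisons consistent and easier to extend:<jupyter_code>def evaluate(title, y_true, y_pred):
    # Compute and print the four scores reported throughout this notebook
    scores = {'f1_micro': metrics.f1_score(y_true, y_pred, average='micro'),
              'f1_weighted': metrics.f1_score(y_true, y_pred, average='weighted'),
              'precision_weighted': metrics.precision_score(y_true, y_pred, average='weighted'),
              'recall_weighted': metrics.recall_score(y_true, y_pred, average='weighted')}
    print('********', title, '********')
    for name, value in scores.items():
        print(name, '=', value)
    return scores

# Example: evaluate('Random Forest, Standard Scaled, PCA=500', y_test, y_pred_RF4)<jupyter_output><empty_output>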
| no_license | /Modified_Files/Algorithms_1.ipynb | ChaithralakshmiS/ChicagoTrafficCrash-PredictingContributoryFactory | 24 |
<jupyter_start><jupyter_text>## 1. Tools for text processing
What are the most frequent words in Herman Melville's novel Moby Dick and how often do they occur?
In this notebook, we'll scrape the novel Moby Dick from the website Project Gutenberg (which contains a large corpus of books) using the Python package requests. Then we'll extract words from this web data using BeautifulSoup. Finally, we'll dive into analyzing the distribution of words using the Natural Language ToolKit (nltk).
The Data Science pipeline we'll build in this notebook can be used to visualize the word frequency distributions of any novel that you can find on Project Gutenberg. The natural language processing tools used here apply to much of the data that data scientists encounter as a vast proportion of the world's data is unstructured data and includes a great deal of text.
Let's start by loading in the three main python packages we are going to use.<jupyter_code># Importing requests, BeautifulSoup and nltk
import requests
from bs4 import BeautifulSoup
import nltk<jupyter_output><empty_output><jupyter_text>## 2. Request Moby Dick
To analyze Moby Dick, we need to get the contents of Moby Dick from somewhere. Luckily, the text is freely available online at Project Gutenberg as an HTML file: https://www.gutenberg.org/files/2701/2701-h/2701-h.htm .
Note that HTML stands for Hypertext Markup Language and is the standard markup language for the web.
To fetch the HTML file with Moby Dick we're going to use the request package to make a GET request for the website, which means we're getting data from it. This is what you're doing through a browser when visiting a webpage, but now we're getting the requested page directly into python instead. <jupyter_code># Getting the Moby Dick HTML
r = requests.get('https://www.gutenberg.org/files/2701/2701-h/2701-h.htm')
# Setting the correct text encoding of the HTML page
r.encoding = 'utf-8'
# Extracting the HTML from the request object
html = r.text
# Printing the first 2000 characters in html
print(html[0:2000])<jupyter_output><empty_output><jupyter_text>## 3. Get the text from the HTML
This HTML is not quite what we want. However, it does contain what we want: the text of Moby Dick. What we need to do now is wrangle this HTML to extract the text of the novel. For this we'll use the package BeautifulSoup.
Firstly, a word on the name of the package: Beautiful Soup? In web development, the term "tag soup" refers to structurally or syntactically incorrect HTML code written for a web page. What Beautiful Soup does best is to make tag soup beautiful again and to extract information from it with ease! In fact, the main object created and queried when using this package is called BeautifulSoup. After creating the soup, we can use its .get_text() method to extract the text.<jupyter_code># Creating a BeautifulSoup object from the HTML
soup = BeautifulSoup(html, 'html.parser')
# Getting the text out of the soup
text = soup.get_text()
# Printing out text between characters 32000 and 34000
print(text[32000:34000])<jupyter_output><empty_output><jupyter_text>## 4. Extract the words
We now have the text of the novel! There is some unwanted stuff at the start and some unwanted stuff at the end. We could remove it, but this content is so much smaller in amount than the text of Moby Dick that, to a first approximation, it is okay to leave it in.
Now that we have the text of interest, it's time to count how many times each word appears, and for this we'll use nltk – the Natural Language Toolkit. We'll start by tokenizing the text, that is, remove everything that isn't a word (whitespace, punctuation, etc.) and then split the text into a list of words.<jupyter_code># Creating a tokenizer
tokenizer = nltk.tokenize.RegexpTokenizer(r'\w+')
# Tokenizing the text
tokens = tokenizer.tokenize(text)
# Printing out the first 8 words / tokens
print(tokens[0:8])<jupyter_output><empty_output><jupyter_text>## 5. Make the words lowercase
OK! We're nearly there. Note that in the above 'Or' has a capital 'O' and that in other places it may not, but both 'Or' and 'or' should be counted as the same word. For this reason, we should build a list of all words in Moby Dick in which all capital letters have been made lower case.<jupyter_code># A new list to hold the lowercased words
words = []
# Looping through the tokens and make them lower case
for word in tokens:
    words.append(word.lower())
# Printing out the first 8 words / tokens
print(words[0:8])<jupyter_output><empty_output><jupyter_text>## 6. Load in stop words
It is common practice to remove words that appear a lot in the English language such as 'the', 'of' and 'a' because they're not so interesting. Such words are known as stop words. The package nltk includes a good list of stop words in English that we can use.<jupyter_code># Getting the English stop words from nltk
sw = nltk.corpus.stopwords.words('english')
# (Run nltk.download('stopwords') once beforehand if the corpus is not installed.)
# Printing out the first eight stop words
print(sw[0:8])<jupyter_output><empty_output><jupyter_text>## 7. Remove stop words in Moby Dick
We now want to create a new list with all words in Moby Dick, except those that are stop words (that is, those words listed in sw). One way to get this list is to loop over all elements of words and add each word to a new list if they are not in sw.<jupyter_code># A new list to hold Moby Dick with No Stop words
words_ns = []
# Appending to words_ns all words that are in words but not in sw
for word in words:
    if word not in sw:
        words_ns.append(word)
# Printing the first 5 words_ns to check that stop words are gone
print(words_ns[0:5])<jupyter_output><empty_output><jupyter_text>## 8. We have the answer
Our original question was:
What are the most frequent words in Herman Melville's novel Moby Dick and how often do they occur?
We are now ready to answer that! Let's create a word frequency distribution plot using nltk. <jupyter_code># This command displays figures inline
%matplotlib inline
# Creating the word frequency distribution
freqdist = nltk.FreqDist(words_ns)
# Plotting the word frequency distribution
freqdist.plot(25)<jupyter_output><empty_output><jupyter_text>## 9. The most common word
Nice! The frequency distribution plot above is the answer to our question.
The natural language processing skills we used in this notebook are also applicable to much of the data that Data Scientists encounter as the vast proportion of the world's data is unstructured data and includes a great deal of text.
So, what word turned out to (not surprisingly) be the most common word in Moby Dick?<jupyter_code># What's the most common word in Moby Dick?
most_common_word = 'whale'<jupyter_output><empty_output>
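<jupyter_text>As a quick confirmation (an added sketch), FreqDist's most_common method lists the top words directly:<jupyter_code># The ten most frequent non-stop words in Moby Dick
print(nltk.FreqDist(words_ns).most_common(10))<jupyter_output><empty_output>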
| no_license | /moby_dick_word_freq.ipynb | aditya9729/Important-projects | 9 |
<jupyter_start><jupyter_text>Scrape the Wikipedia page to get the contents of the page<jupyter_code>import requests
import pandas as pd
url="https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M"
page_postalCanada=requests.get(url)
page_postalCanada<jupyter_output><empty_output><jupyter_text> Using BeautifulSoup first find the table and then fetch all contents in the tag tr to rows<jupyter_code>from bs4 import BeautifulSoup
soup=BeautifulSoup(page_postalCanada.content,'html.parser')
table_postalCanada=soup.find("table")
rows=table_postalCanada.find_all("tr")<jupyter_output><empty_output><jupyter_text>Transform the data into Pandas Dataframe<jupyter_code>postal_codes=pd.DataFrame([pt.get_text() for pt in rows])
print(postal_codes)<jupyter_output> 0
0 \nPostcode\nBorough\nNeighbourhood\n
1 \nM1A\nNot assigned\nNot assigned\n
2 \nM2A\nNot assigned\nNot assigned\n
3 \nM3A\nNorth York\nParkwoods\n
4 \nM4A\nNorth York\nVictoria Village\n
.. ...
284 \nM8Z\nEtobicoke\nMimico NW\n
285 \nM8Z\nEtobicoke\nThe Queensway West\n
286 \nM8Z\nEtobicoke\nRoyal York South West\n
287 \nM8Z\nEtobicoke\nSouth of Bloor\n
288 \nM9Z\nNot assigned\nNot assigned\n
[289 rows x 1 columns]
<jupyter_text>Format the data to the desired table form<jupyter_code>df=postal_codes[0].str.split("\n",n=4,expand=True)
df.drop(columns=4,inplace=True)
new_header=df.iloc[0]
df=df[1:]
df.columns=new_header
df.shape
df.head()<jupyter_output><empty_output><jupyter_text>Drop the rows where Borough is 'Not assigned'<jupyter_code>df=df[df.Borough != "Not assigned"]
df.shape
df.head()<jupyter_output><empty_output><jupyter_text>Combine the rows where more than one neighborhood exist in one postal code area.<jupyter_code>df_final=df.groupby("Postcode").agg(lambda x:','.join(set(x)))
df_final=df_final.reset_index()
df_final.shape
df_final.head()<jupyter_output><empty_output><jupyter_text>Assign the value of Borough to Neighborhood incase Borough has value but a "Not assigned" Neighbourhood<jupyter_code>df_final.Neighbourhood[df_final.Neighbourhood == "Not assigned"]=df_final.Borough
df_final.shape<jupyter_output><empty_output><jupyter_text>Assignment Part 2 Adding Latitude and Longitude to the Postal CodesReading Latitude and Longitude values from the Excel file<jupyter_code>import pandas as pd
df_lat_long=pd.read_csv("http://cocl.us/Geospatial_data")
df_lat_long.shape
df_merge=pd.merge(df_lat_long,df_final,left_on='Postal Code',right_on="Postcode")
df_merge
df_merge=df_merge.reindex(columns=['Postal Code','Borough','Neighbourhood','Latitude','Longitude'])
df_merge<jupyter_output><empty_output>
| no_license | /Proj3Toronto_A2_Lat_Long.ipynb | keerthipattanath/Capstone_Segmenting_Clustering | 8 |
<jupyter_start><jupyter_text>**This notebook is an exercise in the [Introduction to Machine Learning](https://www.kaggle.com/learn/intro-to-machine-learning) course. You can reference the tutorial at [this link](https://www.kaggle.com/dansbecker/underfitting-and-overfitting).**
---
## Recap
You've built your first model, and now it's time to optimize the size of the tree to make better predictions. Run this cell to set up your coding environment where the previous step left off.<jupyter_code># Code you have previously used to load data
import pandas as pd
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
# Path of the file to read
iowa_file_path = '../input/home-data-for-ml-course/train.csv'
home_data = pd.read_csv(iowa_file_path)
# Create target object and call it y
y = home_data.SalePrice
# Create X
features = ['LotArea', 'YearBuilt', '1stFlrSF', '2ndFlrSF', 'FullBath', 'BedroomAbvGr', 'TotRmsAbvGrd']
X = home_data[features]
# Split into validation and training data
train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1)
# Specify Model
iowa_model = DecisionTreeRegressor(random_state=1)
# Fit Model
iowa_model.fit(train_X, train_y)
# Make validation predictions and calculate mean absolute error
val_predictions = iowa_model.predict(val_X)
val_mae = mean_absolute_error(val_predictions, val_y)
print("Validation MAE: {:,.0f}".format(val_mae))
# Set up code checking
from learntools.core import binder
binder.bind(globals())
from learntools.machine_learning.ex5 import *
print("\nSetup complete")<jupyter_output><empty_output><jupyter_text># Exercises
You could write the function `get_mae` yourself. For now, we'll supply it. This is the same function you read about in the previous lesson. Just run the cell below.<jupyter_code>def get_mae(max_leaf_nodes, train_X, val_X, train_y, val_y):
model = DecisionTreeRegressor(max_leaf_nodes=max_leaf_nodes, random_state=0)
model.fit(train_X, train_y)
preds_val = model.predict(val_X)
mae = mean_absolute_error(val_y, preds_val)
return(mae)<jupyter_output><empty_output><jupyter_text>## Step 1: Compare Different Tree Sizes
Write a loop that tries the following values for *max_leaf_nodes* from a set of possible values.
Call the *get_mae* function on each value of max_leaf_nodes. Store the output in some way that allows you to select the value of `max_leaf_nodes` that gives the most accurate model on your data.<jupyter_code>candidate_max_leaf_nodes = [5, 25, 50, 100, 250, 500]
# Write loop to find the ideal tree size from candidate_max_leaf_nodes
scores = {}
for max_leaf_nodes in candidate_max_leaf_nodes:
    my_mae = get_mae(max_leaf_nodes, train_X, val_X, train_y, val_y)
    scores[max_leaf_nodes] = my_mae
    print("Max leaf nodes: %d \t\t Mean Absolute Error: %d" %(max_leaf_nodes, my_mae))
# Store the best value of max_leaf_nodes (it will be either 5, 25, 50, 100, 250 or 500)
best_tree_size = min(scores, key=scores.get)
# Check your answer
step_1.check()
# The lines below will show you a hint or the solution.
# step_1.hint()
# step_1.solution()<jupyter_output><empty_output><jupyter_text>## Step 2: Fit Model Using All Data
You know the best tree size. If you were going to deploy this model in practice, you would make it even more accurate by using all of the data and keeping that tree size. That is, you don't need to hold out the validation data now that you've made all your modeling decisions.<jupyter_code># Fill in argument to make optimal size and uncomment
final_model = DecisionTreeRegressor(max_leaf_nodes=best_tree_size,random_state=1)
# fit the final model and uncomment the next two lines
final_model.fit(X,y)
# Check your answer
step_2.check()
# step_2.hint()
#step_2.solution()<jupyter_output><empty_output>
| no_license | /exercise-underfitting-and-overfitting.ipynb | Satyam175/Intro-to-Machine-Learning-Course | 4 |
<jupyter_start><jupyter_text>In the previous checkpoint, we saw that OLS pins down the coefficients of the linear regression model by minimizing the sum of the model's squared error terms. However, in order for estimated coefficients to be valid and test statistics associated with them to be reliable, some assumptions about the data and the model should be met. These assumptions are known as **Gauss Markov Assumptions** or **Gauss Markov Conditions**. In this checkpoint, we'll review these assumptions using our medical costs model from the previous checkpoint.
Before interpreting the estimated coefficients of a linear regression model, it's always a good idea to check whether the Gauss Markov assumptions hold. Otherwise, we need to try to fix our model. Sometimes this means applying a technique to solve for a specific problem. But usually, we need to change our model by including additional variables or excluding problematic ones. Once we have corrected our model, we can then re-estimate it using OLS and check whether or not the Gauss Markov conditions are met. As you will see, this is an iterative process.
**A remark on the exact number of Gauss Markov Conditions:** Don't get surprised if you see in some places that the number of Gauss Markov Conditions is four, five, or six! This is because some of the conditions can be derived from the others. For the sake of clarity, we'll introduce Gauss Markov Conditions in six bullets.
Here are the main topics we'll cover in this checkpoint:
* linearity of models in their coefficients
* the error term should be zero on average
* homoscedasticity
* low multicollinearity
* error terms should be uncorrelated with one another
* features shouldn't be correlated with the errors
* normality of the errors
This checkpoint ends with two assignments. First, you'll build a model using a weather dataset and check whether the Gauss Markov Assumptions hold or not. Second, you'll review your house price model from the previous checkpoint from the perspective of the Gauss Markov assumptions.## Our medical costs model
We'll use our medical costs model from the previous checkpoint to demonstrate Gauss Markov conditions. First, we need to import the relevant libraries and do the feature engineering steps. Then we will fit our model using OLS.<jupyter_code>import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn import linear_model
import statsmodels.formula.api as smf
from sqlalchemy import create_engine
# Display preferences.
%matplotlib inline
pd.options.display.float_format = '{:.3f}'.format
import warnings
warnings.filterwarnings(action="ignore")
postgres_user = 'dsbc_student'
postgres_pw = '7*.8G9QH21'
postgres_host = '142.93.121.174'
postgres_port = '5432'
postgres_db = 'medicalcosts'
engine = create_engine('postgresql://{}:{}@{}:{}/{}'.format(
postgres_user, postgres_pw, postgres_host, postgres_port, postgres_db))
insurance_df = pd.read_sql_query('select * from medicalcosts',con=engine)
# no need for an open connection, as we're only doing a single query
engine.dispose()
insurance_df.head(10)
insurance_df["is_male"] = pd.get_dummies(insurance_df.sex, drop_first=True)
insurance_df["is_smoker"] = pd.get_dummies(insurance_df.smoker, drop_first=True)
# Y is the target variable
Y = insurance_df['charges']
# X is the feature set which includes
# is_male and is_smoker variables
X = insurance_df[['is_male','is_smoker']]
# We create a LinearRegression model object
# from scikit-learn's linear_model module.
lrm = linear_model.LinearRegression()
# fit method estimates the coefficients using OLS
lrm.fit(X, Y)
# Inspect the results.
print('\nCoefficients: \n', lrm.coef_)
print('\nIntercept: \n', lrm.intercept_)<jupyter_output>
Coefficients:
[ -65.37868556 23622.13598049]
Intercept:
8466.035592512442
<jupyter_text>## Assumption one: linearity of the model in its coefficients
The first assumption that must be met is that the target variable should be a linear function of the model's __coefficients__. People often confuse this condition by incorrectly thinking that the relationship between target and features must be linear in the sense of being a straight line. But this need not be the case. The relationship could be quadratic or higher order. A model like (eq.1) below is completely valid:
$$ y = \beta_0 + \beta_1x_1 + \beta_2x_2^2 + \epsilon \qquad(eq.1)$$
As we mentioned earlier, linear regression modeling is quite flexible in capturing non-linear relationships between target and features. For example, in (eq.1), the relationship between the $y$ and $x$ is indeed quadratic. Below, we show how linear regression correctly estimates the intercept and the coefficients of the following model using synthetic data:
$$ y = 1 + 2x_1 + 3x_1^2 \qquad(eq.2)$$
<jupyter_code>df = pd.DataFrame()
# data from 0 to 999
df["X"] = np.arange(0,1000,1)
# we take the square of X
df["X_sq"] = df["X"]**2
# this is our equation: Y = 1 + 2*X + 3*X^2
df["Y"] = 1 + 2*df["X"] + 3*df["X_sq"]
# we fit a linear regression where target is Y
# and features are X and X^2
lrm_example = linear_model.LinearRegression()
lrm_example.fit(df[["X","X_sq"]],df["Y"])
# predictions of the linear regression
predictions = lrm_example.predict(df[["X","X_sq"]])
# we print the estimated coefficients
print('\nCoefficients: \n', lrm_example.coef_)
print('\nIntercept: \n', lrm_example.intercept_)
# we plot the estimated Y and X
# the relationship should be quadratic
plt.scatter(df["X"], predictions)
plt.xlabel("feature")
plt.ylabel("target")
plt.title('Linear regression can capture quadratic relationship')
plt.show()<jupyter_output>
Coefficients:
[2. 3.]
Intercept:
0.9999999997671694
<jupyter_text>As you can see, the linear regression model correctly estimated the true coefficients and captured the quadratic relationship between the target and the feature.
In contrast, a model like the one below is an invalid one as it violates the linearity assumption:
$$ y = \beta_0 + \beta_1x_1 + \beta_1^2x_1 + \epsilon \qquad(eq.3)$$
The relationship between the target $y$ and the coefficient $\beta_1$ is said to be non-linear because if we hold all independent variables $x$ and other coefficients constant, the graph of $y$ for changing $\beta_1$ is not a straight line.
In principle, this assumption is not related to estimation but to how we specify our model. So as long as we use models that take into account this linearity assumption as we did in our medical costs example, then we shouldn't worry about this assumption at all.## Assumption two: the error term should be zero on average
This second assumption states that the expected value of the error term should be zero. In mathematical terms:
$$\mathbb{E}(\epsilon) = 0$$
The $\mathbb{E}$ symbol indicates the expectation operator. We can read it as *"the average of the error terms should be equal to zero"*. The error term accounts for the variation in the target variable that is not explained by the features. So, ideally, the error term shouldn't explain anything in the variation of the target variable but instead should be determined randomly. If the expected value of the error is different than zero, our model would become biased! For example, if $\mathbb{E}(\epsilon) = -1$, then it means that our model systematically overpredicts the target variable.
This assumption is not held if you forget to include the constant term in your model. **This is why we said that you should always include a constant in your model**. As long as we include a constant in a model, we shouldn't be worried about this assumption as the constant will force the error terms to be zero on average.
In our medical costs model, we can see this happening:<jupyter_code>predictions = lrm.predict(X)
errors = Y - predictions
print("Mean of the errors in the medical costs model is: {}".format(np.mean(errors)))<jupyter_output>Mean of the errors in the medical costs model is: 8.891024438856429e-13
<jupyter_text>Since we include the constant term in the model, the average of the model's error is effectively zero.## Assumption three: homoscedasticity
The third assumption is the requirement of **homoscedasticity**. A model is homoscedastic when the distribution of its error terms (known as "scedasticity") is consistent for all predicted values. In other words, the error variance shouldn't systematically change across observations. When this assumption is not met, we are dealing with **heteroscedasticity**.
For example, if our error terms aren't consistently distributed and you have more variance in the error for large outcome values than for small ones, then the confidence interval for large predicted values will be too small because it will be based on the average error variance. This leads to overconfidence in the accuracy of our model's predictions.
Let's checkout whether our medical costs model suffers from heteroscedasticity by visualizing it:<jupyter_code>plt.scatter(predictions, errors)
plt.xlabel('Predicted')
plt.ylabel('Residual')
plt.axhline(y=0)
plt.title('Residual vs. Predicted')
plt.show()<jupyter_output><empty_output><jupyter_text>It seems that error variance is higher for the higher values of the target variable. This implies that our error terms aren't homoscedastic. However, deriving conclusions from visuals is only an informal way of figuring out the problem. Thankfully, there are several formal statistical tests that we can use to determine whether there is heteroscedasticity in the error terms.
Here, we demonstrate two of them: **Bartlett** and **Levene** tests. The null hypothesis for both tests is that the errors are homoscedastic. Both tests can be imported from scipy's stats module.<jupyter_code>from scipy.stats import bartlett
from scipy.stats import levene
bart_stats = bartlett(predictions, errors)
lev_stats = levene(predictions, errors)
print("Bartlett test statistic value is {0:3g} and p value is {1:.3g}".format(bart_stats[0], bart_stats[1]))
print("Levene test statistic value is {0:3g} and p value is {1:.3g}".format(lev_stats[0], lev_stats[1]))<jupyter_output>Bartlett test statistic value is 78.9785 and p value is 6.28e-19
Levene test statistic value is 6.87294 and p value is 0.0088
<jupyter_text>The p-values of both tests are lower than 0.05. So, the test results reject the null hypothesis which means our errors are heteroscedastic.
There may be several causes of heteroscedasticity. Examples include outliers in the data and omitted variables that are important in explaining the variance of the target variable. Dealing with outliers and including relevant variables help to fix the heteroscedasticity problem. Some fixes to heteroscedasticity include transforming the dependent variable (see [Box Cox transformation](https://www.statisticshowto.datasciencecentral.com/box-cox-transformation/) and [log transformation](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4120293/)) and adding features that target the poorly-estimated areas.
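As a quick illustration of the transformation fix (a sketch added here, not part of the original checkpoint), we can re-fit the same model on the log of charges and re-run the Levene test:<jupyter_code># Re-fit the medical costs model on log(charges) and re-check the error variance
log_Y = np.log(insurance_df['charges'])
lrm_log = linear_model.LinearRegression()
lrm_log.fit(X, log_Y)
log_predictions = lrm_log.predict(X)
log_errors = log_Y - log_predictions
lev_stats_log = levene(log_predictions, log_errors)
print("Levene test on the log model: statistic {0:.3g}, p value {1:.3g}".format(lev_stats_log[0], lev_stats_log[1]))<jupyter_output><empty_output><jupyter_text>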
If you are working on a dataset that has a limited number of features and your model suffers from heteroscedasticity because of omitted variables, then fixing this problem is not easy. However, keep in mind that even though your model suffers from the heteroscedasticity, the estimated coefficients are still valid (more formally, they are consistent but we will not discuss consistency of the estimations in this module. If you're interested, you can read about it in this [Wikipedia article](https://en.wikipedia.org/wiki/Consistent_estimator)). The only problem is with the reliability of some statistical tests like t-test. Heteroscedasticity may make some estimated coefficients seem to be statistically insignificant. We'll discuss statistical significance in the next checkpoint. ## Assumption four: low multicollinearity
Individual features should be only weakly correlated with one another, and ideally completely uncorrelated. When features are correlated, they may both explain the same pattern of variance in the outcome. The model will attempt to find a solution, potentially by attributing half the explanatory power to one feature and half to the other. This isn’t a problem if our only goal is prediction, because then all that matters is that the variance gets explained. However, if we want to know which features matter most when predicting an outcome, multicollinearity can cause us to underestimate the relationship between features and outcomes.
If there is correlation of 1 or -1 between a variable and another or several variables, this is called **perfect multicollinearity**. It is easy to understand perfect collinearity between two variables. But how can one variable be correlated with several variables? This happens when one variable is a linear combination of the others.
**A remark on dummy variables:** Caution is needed when working with dummy variables because of this linear combination issue. If we create some dummy variables from a categorical variable, then we need to exclude one of them from the model. This is because any one of those dummy variables can be represented as 1 minus the sum of the others. Hence a perfect multicollinearity occurs.
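For our medical costs model, the pairwise correlation of the two features can be inspected directly (a small sketch added here for illustration):<jupyter_code># Correlation matrix of the features; values near +/-1 would signal multicollinearity
print(X.corr())<jupyter_output><empty_output><jupyter_text>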
To detect multicollinearity, we can simply look at the correlation matrix of the features. Multicollinearity can be fixed by PCA or by discarding some of the correlated features.## Assumption five: error terms should be uncorrelated with one another
Error terms should be uncorrelated with one another. In other words, the error term for one observation shouldn't predict the error term for another. This type of serial correlation may happen if we omit a relevant variable from the model. So, including that variable into the model can solve for this issue.
To identify whether the error terms are correlated with each other or not, we can graph them. In the graph, we need to observe randomness.
Let's check our medical costs model's errors:<jupyter_code>plt.plot(errors)
plt.show()<jupyter_output><empty_output><jupyter_text>It seems that the error terms of our model are uncorrelated with each other.
Another way to look at correlations between errors is to use the **autocorrelation function**. This function computes the correlation of a variable with itself. In our case, the order of the errors are the orders of the observations. We can use the `acf()` function from statsmodels as follows: <jupyter_code>from statsmodels.tsa.stattools import acf
acf_data = acf(errors)
plt.plot(acf_data[1:])
plt.show()<jupyter_output><empty_output><jupyter_text>So, the autocorrelation between the errors of our medical costs model is indeed very low (ranging between -0.06 and 0.05).## Assumption six: features shouldn't be correlated with the errors
Last but definitely not least, and arguably the most important assumption: explanatory variables and errors should be independent. If this assumption doesn't hold, then the model's predictions will be unreliable as the estimates of the coefficients would be biased. This assumption is known as the **exogeneity**.
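As a sanity check for our medical costs model (an added sketch, not a formal test), we can print the correlation between each feature and the model's errors. OLS makes the residuals essentially uncorrelated with the features that are already in the model, so the real concern is correlation with factors left out of it:<jupyter_code># Correlation between each feature and the residuals
for col in X.columns:
    print(col, np.corrcoef(X[col], errors)[0, 1])<jupyter_output><empty_output><jupyter_text>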
Violations of the exogeneity assumption may have several sources. Common causes are omitted variables and simultaneous causation between independent variables and the target. If the problem stems from simultaneous causation then we need to apply some advanced techniques to solve for the issue but this is beyond the scope of this bootcamp.## A very important remark on the normality of the errors
So far in this checkpoint, we've covered six assumptions for OLS regression. Another important thing to consider is the normality of the error terms. Although it is not an assumption of OLS, it still can impact our results. Specifically, normality of errors is not required to apply OLS to a linear regression model, but in order to measure the statistical significance of our estimated coefficients, error terms must be normally distributed. We'll cover t- and F-tests, which rest upon the normality of the errors in the next checkpoint.
More often than not, non-normally distributed errors stem from omitted variables. Including the omitted relevant features to the model may help fix the issue. Sometimes, transforming the dependent variable also helps.
There are various ways to check for normality of error terms. An informal way of doing this is by visualizing the errors in a QQ plot or to look at the histogram:<jupyter_code>rand_nums = np.random.normal(np.mean(errors), np.std(errors), len(errors))
plt.figure(figsize=(12,5))
plt.subplot(1,2,1)
plt.scatter(np.sort(rand_nums), np.sort(errors)) # we sort the arrays
plt.xlabel("the normally distributed random variable")
plt.ylabel("errors of the model")
plt.title("QQ plot")
plt.subplot(1,2,2)
plt.hist(errors)
plt.xlabel("errors")
plt.title("Histogram of the errors")
plt.tight_layout()
plt.show()<jupyter_output><empty_output><jupyter_text>As can be seen in the charts above, our errors are not normally distributed exactly. But the QQ plot and the histogram imply that the distribution is not very far away from normal.
While visualizations give us a first impression about normality, the best way to learn about this is to apply formal statistical tests. To this end, we use two of them from scipy's stats module: **Jarque Bera** and **normal** tests. The null hypothesis of both tests is that the errors are normally distributed.
Let's use these tests to find out whether our error terms are normally distributed or not:<jupyter_code>from scipy.stats import jarque_bera
from scipy.stats import normaltest
jb_stats = jarque_bera(errors)
norm_stats = normaltest(errors)
print("Jarque-Bera test statistics is {0} and p value is {1}".format(jb_stats[0], jb_stats[1]))
print("Normality test statistics is {0} and p value is {1}".format(norm_stats[0], norm_stats[1]))<jupyter_output>Jarque-Bera test statistics is 211.89696216982082 and p value is 0.0
Normality test statistics is 135.84198399398656 and p value is 3.1789812786044006e-30
| no_license | /Supervised Learning, Solving Regression Problems/Lecture/3.assumptions_of_linear_regression.ipynb | ltq477/Thinkful | 9 |
<jupyter_start><jupyter_text># Pricing and Risk Calculation - PortfoliosPortfolios allow for efficient pricing and risk calculation. The same principles as in the basic risk and pricing tutorials can be applied to portfolios. Pricing and risk can be viewed for an individual instrument or at the aggregate portfolio level.<jupyter_code>from gs_quant.session import Environment, GsSession
from gs_quant.common import PayReceive
from gs_quant.instrument import IRSwaption
import gs_quant.risk as risk
import datetime as dt
# external users should substitute their client id and secret; please skip this step if using internal jupyterhub
GsSession.use(Environment.PROD, client_id=None, client_secret=None, scopes=('run_analytics',))
swaption1 = IRSwaption(PayReceive.Receive, '5y', 'EUR',
expiration_date='3m', name='EUR3m5y')
swaption2 = IRSwaption(PayReceive.Receive, '5y', 'EUR',
expiration_date='6m', name='EUR6m5y')
# Create Portfolio w/ swaptions
from gs_quant.markets.portfolio import Portfolio
portfolio = Portfolio((swaption1, swaption2))
portfolio.resolve()
# Calculate Risk for Portfolio
port_risk = portfolio.calc(
(risk.DollarPrice, risk.IRDeltaParallel, risk.IRVega))
# View Instrument Level risk for portfolio
port_risk[risk.IRDeltaParallel]['EUR3m5y']
# View Aggregated Risk for a Portfolio
port_risk[risk.IRDeltaParallel].aggregate()
# Calculate Portfolio Risks within HistoricalPricingContext
from gs_quant.markets import HistoricalPricingContext
start_date = dt.date(2019, 6, 3)
end_date = dt.date(2019, 11, 18)
with HistoricalPricingContext(start_date, end_date):
port_ird_res = portfolio.calc(risk.IRDeltaParallelLocalCcy)
port_ird_agg = port_ird_res[risk.IRDeltaParallelLocalCcy].aggregate()
swap1_ird = port_ird_res[risk.IRDeltaParallelLocalCcy]['EUR3m5y']
swap2_ird = port_ird_res[risk.IRDeltaParallelLocalCcy]['EUR6m5y']
import matplotlib.pyplot as plt
plt.figure(figsize=(7, 5))
plt.xlabel('Date')
ax1 = port_ird_agg.plot(label='Portfolio')
swap1_ird.plot(label='EUR3m5y')
swap2_ird.plot(label='EUR6m5y')
h1, l1 = ax1.get_legend_handles_labels()
plt.title('IRDeltaParallel Timeseries')
plt.legend(h1, l1, loc=2)
plt.show()<jupyter_output><empty_output>
| permissive | /gs_quant/tutorials/2_portfolios.ipynb | ahmedriza/gs-quant | 1 |
<jupyter_start><jupyter_text><jupyter_code>import matplotlib.pyplot as plt
import numpy as np
from sklearn import datasets, linear_model
from sklearn.metrics import mean_squared_error, r2_score
# Load the diabetes dataset
diabetes_X, diabetes_y = datasets.load_diabetes(return_X_y=True)
print (diabetes_X)
# Split the data into training/testing sets
diabetes_X_train = diabetes_X[:-20]
diabetes_X_test = diabetes_X[-20:]
# Split the targets into training/testing sets
diabetes_y_train = diabetes_y[:-20]
diabetes_y_test = diabetes_y[-20:]
# Create linear regression object
regr = linear_model.LinearRegression()
# Train the model using the training sets
regr.fit(diabetes_X_train, diabetes_y_train)
# Make predictions using the training set
diabetes_y_pred = regr.predict(diabetes_X_train)
# Make predictions using the testing set
diabetes_y_pred = regr.predict(diabetes_X_test)
# Use score method to get the R^2 (coefficient of determination) of the model on the test data
score = regr.score(diabetes_X_test, diabetes_y_test)
print(score)
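# (Added sketch) mean_squared_error and r2_score are imported above but unused so far;
# they make the full-feature model's test error explicit:
print("Test MSE:", mean_squared_error(diabetes_y_test, diabetes_y_pred))
print("Test R^2:", r2_score(diabetes_y_test, diabetes_y_pred))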
# Linear Regression using single parameter
import matplotlib.pyplot as plt
import numpy as np
from sklearn import datasets, linear_model
from sklearn.metrics import mean_squared_error, r2_score
# Load the diabetes dataset
diabetes_X, diabetes_y = datasets.load_diabetes(return_X_y=True)
# Use only one feature
diabetes_X = diabetes_X[:, np.newaxis, 2]
# Split the data into training/testing sets
diabetes_X_train = diabetes_X[:-20]
diabetes_X_test = diabetes_X[-20:]
# Split the targets into training/testing sets
diabetes_y_train = diabetes_y[:-20]
diabetes_y_test = diabetes_y[-20:]
# Create linear regression object
regr = linear_model.LinearRegression()
# Train the model using the training sets
regr.fit(diabetes_X_train, diabetes_y_train)
plt.scatter(diabetes_X_train, diabetes_y_train, color='red')
# Make predictions using the testing set
diabetes_y_pred = regr.predict(diabetes_X_train)
plt.plot(diabetes_X_train, diabetes_y_pred, color='green')
plt.show()
# Make predictions using the testing set
diabetes_y_pred = regr.predict(diabetes_X_test)
# Plot outputs
plt.scatter(diabetes_X_test, diabetes_y_test, color='black')
plt.plot(diabetes_X_test, diabetes_y_pred, color='blue', linewidth=3)
plt.show()
# Sigmoid Function or Logistic Function Characteristic
x = np.linspace(-5,+5,200)
y = 1 / (1 + np.exp(-x))
plt.plot(40+x, y)
plt.xlabel('size of tumour')
plt.ylabel('malignant or benign')
plt.show()
from sklearn.datasets import load_digits
digits = load_digits()
# Print to show there are 1797 images (8 by 8 images for a dimensionality of 64)
print("Image Data Shape" , digits.data.shape)
# Print to show there are 1797 labels (integers from 0-9)
print("Label Data Shape", digits.target.shape)
import numpy as np
import matplotlib.pyplot as plt
plt.figure(figsize=(20,4))
for index, (image, label) in enumerate(zip(digits.data[0:5], digits.target[0:5])):
plt.subplot(1, 5, index + 1)
plt.imshow(np.reshape(image, (8,8)), cmap=plt.cm.gray)
plt.title('Training: %i\n' % label, fontsize = 20)
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(digits.data, digits.target, test_size=0.25, random_state=0)
from sklearn.linear_model import LogisticRegression
logisticRegr = LogisticRegression()
logisticRegr.fit(x_train, y_train)
# Returns a NumPy Array
# Predict for One Observation (image)
logisticRegr.predict(x_test[0].reshape(1,-1))
# Predict for Multiple Observations (images) at Once
logisticRegr.predict(x_test[0:10])
# Make predictions on entire test data
predictions = logisticRegr.predict(x_test)
# Use score method to get accuracy of model
score = logisticRegr.score(x_test, y_test)
print(score)
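# (added illustration, not part of the original notebook) a confusion matrix breaks the
# accuracy above down per digit
from sklearn.metrics import confusion_matrix
print(confusion_matrix(y_test, predictions))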
<jupyter_output>Image Data Shape (1797, 64)
Label Data Shape (1797,)
|
no_license
|
/Untitled2.ipynb
|
suryaprakash9143/AITTA
| 1 |
<jupyter_start><jupyter_text>## Key Points of This Lab
* Min-Max normalization
* Z-Score standardization
* One-hot encoding
* Data discretization### Overview of Feature Engineering
In data mining and analysis, beyond statistically summarizing existing data, the more important goal is usually to build predictive models and thereby extract more information. In the earlier material we learned how to collect data and how to clean and preprocess it; the purpose of that work is to obtain suitable data for building machine learning models. What exactly machine learning algorithms are will be discussed in depth later. In this lab we continue to look at how to obtain "suitable data". Building a machine learning model essentially means handing the data to an algorithm so that it learns appropriate parameters, which are then saved as a usable model. The data fed to the algorithm is therefore crucial: to get the best predictive performance you generally need not only a good algorithm but also the best possible input data. "Better" here simply means data that is useful for training the model, without redundant values that contribute little. So how do we derive better data, containing more useful information, from an existing dataset? This is where feature engineering comes in. Feature engineering is a very important part of data analysis: in short, it means designing data features that help train a better-performing model. It is usually carried out in the later stages of data preprocessing, namely the data transformation and reduction steps introduced next. Feature engineering is not a mandatory step of data analysis, but applying it often yields noticeably better results, so it is essential to learn. Looking at data analysis competitions such as Kaggle, the best-performing teams usually do not rely on exotic algorithms (they use the common machine learning algorithms); the key is that their feature engineering is done very well.## Data Standardization
Standardization (making data dimensionless) is a common preprocessing technique. Its main purpose is to remove the differences caused by features having different units and value ranges; such differences not only give the data uneven weight but also make visualization awkward.<jupyter_code>import numpy as np
import pandas as pd
%matplotlib inline
np.random.seed(10)  # random seed
df = pd.DataFrame({'A': np.random.random(
20), 'B': np.random.random(20) * 10000})
### the two columns differ greatly in scale
df.plot()<jupyter_output><empty_output><jupyter_text>## Z-Score Standardization
Z-Score standardization is one of the most commonly used standardization methods. Its formula is:
$$\hat x=\frac{x-\mu}{\sigma}$$
where $\mu$ is the mean and $\sigma$ the standard deviation of the sample data. After Z-Score standardization the data has mean 0 and variance 1. Below we apply Z-Score standardization to the DataFrame above and plot the result.<jupyter_code>df_z_score = (df - df.mean()) / df.std() # Z-Score standardization
df_z_score.plot()<jupyter_output><empty_output><jupyter_text>Scipy provides a corresponding API for Z-Score standardization, `scipy.stats.zscore`:<jupyter_code>from scipy import stats
stats.zscore(df)<jupyter_output><empty_output><jupyter_text>You may notice that the result here differs from the result of the formula above. The reason is the degrees-of-freedom convention rather than precision: `scipy.stats.zscore` operates on the underlying NumPy array and uses the population standard deviation (ddof=0), while pandas' `std()` uses the sample standard deviation (ddof=1). You can reproduce scipy's result with the following code:<jupyter_code>(df.values - df.values.mean(axis=0)) / df.values.std(axis=0) # recompute on the underlying NumPy arrays (ddof=0)<jupyter_output><empty_output><jupyter_text>## Min-Max NormalizationMin-Max normalization is another common technique; its effect is similar to interval scaling, mapping values into the range 0-1. Its formula is:
$$\hat x=\frac{x-x_{min}}{x_{max}-x_{min}}$$
where $x_{max}$ is the maximum of the sample data, $x_{min}$ is the minimum, and $x_{max}-x_{min}$ is the range.<jupyter_code>df_min_max = (df - df.min())/(df.max() - df.min())
df_min_max.plot()<jupyter_output><empty_output><jupyter_text>Similarly, scikit-learn provides a Min-Max normalization API, `sklearn.preprocessing.MinMaxScaler()`, used as follows:<jupyter_code>from sklearn.preprocessing import MinMaxScaler
MinMaxScaler().fit_transform(df)<jupyter_output><empty_output><jupyter_text>## One-Hot EncodingDuring preprocessing we often encounter feature columns whose values are not continuous but categorical. For example, a device's status might take three values: normal, mechanical failure, and electrical failure. To use such data in later predictive analysis, the text states have to be converted; typically 0 could represent normal, 1 mechanical failure, and 2 electrical failure.
However, such a mapping often leads the learner to treat 2 (electrical failure) as "larger" than 1 (mechanical failure), which distorts the model's results and is clearly unacceptable.
Therefore, for categorical feature variables we use one-hot encoding to convert them into binary feature codes, which also makes the features sparse. One-hot encoding uses a bank of state-register bits to encode the states: each state has its own bit, and only one bit is active at any time.| Natural state code | One-hot code |
|:----------:|:--------:|
| 000 | 000001 |
| 001 | 000010 |
| 010 | 000100 |
| 011 | 001000 |
| 100 | 010000 |
| 101 | 100000 |In Pandas, `get_dummies` makes one-hot encoding very convenient.<jupyter_code>df = pd.DataFrame({'fruits': ['apple', 'banana', 'pineapple']*2}) # example categorical column
df
pd.get_dummies(df)<jupyter_output><empty_output><jupyter_text>## Data DiscretizationData discretization usually refers to discretizing continuous data. For example, a dataset containing "age" stores ages as numeric values that can be viewed as continuous data. If we split the data into age groups (1-10 children, 11-20 adolescents, 21-30 young adults, and so on), that is a discretization process. Discretization is sometimes required by the algorithm, and sometimes discrete data simply expresses the information better. You could write your own code to discretize continuous data into intervals, but Pandas offers a very convenient interval-discretization method, `pd.cut`. For example, split the array `[1, 2, 7, 8, 5, 4, 12, 6, 3]` into 3 equal-width parts:<jupyter_code>pd.cut(np.array([1, 2, 7, 8, 5, 4, 12, 6, 3]), bins=3) # bins sets the number of intervals<jupyter_output><empty_output><jupyter_text>In the returned value, first look at `Categories (3, interval[float64]): [(0.989, 4.667] < (4.667, 8.333] < (8.333, 12.0]]`, which lists the 3 equal-width intervals determined by the array. The first part of the return value lists, for each number in the array, the interval it falls into. If we want to label the 3 intervals `"small", "medium", "large"`, we only need to pass the `labels=` parameter:<jupyter_code>pd.cut(np.array([1, 2, 7, 8, 5, 4, 12, 6, 3]),
bins=3, labels=["small", "medium", "large"])<jupyter_output><empty_output><jupyter_text>By default the returned intervals are anchored at the maximum value and extended by 0.1% of the range toward the minimum, so that every element is covered. That is why the first interval above starts at 0.989 rather than the minimum 1, since: $$ 1 - 0.989 = (12 - 1) \times 0.1\% $$
Of course, you can also specify the interval boundaries yourself:<jupyter_code>pd.cut(np.array([1, 2, 7, 8, 5, 4, 12, 6, 3]),
bins=[0, 5, 10, 15], labels=["small", "medium", "large"])<jupyter_output><empty_output>
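<jupyter_text>As a small added sketch (not part of the original lab, and using freshly generated columns rather than the variables defined above), scikit-learn's `sklearn.preprocessing.StandardScaler` performs the same Z-Score standardization as the manual formula earlier:<jupyter_code>from sklearn.preprocessing import StandardScaler
num_df = pd.DataFrame({'A': np.random.random(20), 'B': np.random.random(20) * 10000})  # numeric frame like the earlier example
StandardScaler().fit_transform(num_df)  # each column now has mean 0 and unit variance (ddof=0)<jupyter_output><empty_output>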
|
no_license
|
/week2/data_transformation.ipynb
|
sc16rl/shiyan
| 10 |
<jupyter_start><jupyter_text># Generative Adversarial Networks
**Generative Adversarial Networks** (GANs) use neural networks for generative modeling.
> **Generative modeling** is an unsupervised learning task in machine learning that involves automatically discovering and learning the regularities or patterns in input data in such a way that the model can be used to generate or output new examples that plausibly could have been drawn from the original dataset.
While there are many approaches used for generative modeling, a Generative Adverserial Network takes the following approach:

There are two neural networks: a **Generator** and a **Discriminator**. The generator generates a "fake" sample given a random input vector/matrix, and the discriminator attempts to detect whether a given sample is "real" (picked from the training data) or "fake" (generated by the generator). Training happens in tandem: we train the discriminator for a few epochs, then train the generator for a few epochs, and repeat. This way both the generator and the discriminator get better at doing their job.
GANs, however, can be notoriously difficult to train, and are extremely sensitive to hyperparameters, activation functions, and regularization.
We'll train a GAN to generate images of handwritten digits similar to those from the MNIST database.
* Define the problem statement
* Load the data (with transforms and normalization)
* Denormalize for visual inspection of samples
* Define the Discriminator network
* Study the activation function: Leaky ReLU
* Define the Generator network
* Explain the output activation function: tanh
* Look at some sample outputs
* Define losses, optimizers and helper functions for training
* For discriminator
* For generator
* Train the model
* Save intermediate generated images to file
* Look at outputs
* Save the models## Load the Data
Download and import the data as a PyTorch dataset using the `MNIST` helper class from `torchvision.datasets`.<jupyter_code>import torch
import torchvision
from torchvision.transforms import ToTensor, Normalize, Compose
from torchvision.datasets import MNIST
mnist = MNIST(root='data',
train=True,
download=True,
transform=Compose([ToTensor(), Normalize(mean=(0.5,), std=(0.5,))]))<jupyter_output>Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to data/MNIST/raw/train-images-idx3-ubyte.gz
<jupyter_text>We are transforming the pixel values from the range `[0, 1]` to the range `[-1, 1]`. Let's look at a sample tensor from the data.<jupyter_code>img, label = mnist[0]
print('Label: ', label)
print(img[:,10:15,10:15])
torch.min(img), torch.max(img)<jupyter_output>Label: 5
tensor([[[-0.9922, 0.2078, 0.9843, -0.2941, -1.0000],
[-1.0000, 0.0902, 0.9843, 0.4902, -0.9843],
[-1.0000, -0.9137, 0.4902, 0.9843, -0.4510],
[-1.0000, -1.0000, -0.7255, 0.8902, 0.7647],
[-1.0000, -1.0000, -1.0000, -0.3647, 0.8824]]])
<jupyter_text>The pixel values are in the range (-1,1). We will now define a helper function to denormalize and view the images. We will also use this for viewing the generated images.<jupyter_code>def denorm(x):
out = (x + 1) / 2
return out.clamp(0, 1)
import matplotlib.pyplot as plt
%matplotlib inline
img_norm = denorm(img)
plt.imshow(img_norm[0], cmap='gray')
print('Label:', label)<jupyter_output>Label: 5
<jupyter_text>Create a dataloader to load the images in batches.<jupyter_code>from torch.utils.data import DataLoader
batch_size = 100
data_loader = DataLoader(mnist, batch_size, shuffle=True)
for img_batch, label_batch in data_loader:
print('first batch')
print(img_batch.shape)
plt.imshow(img_batch[0][0], cmap='gray')
print(label_batch)
break
# Device configuration
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
device<jupyter_output><empty_output><jupyter_text>## Discriminator Network
The Discriminator takes an image as input, and tries to classify it as "real" or "generated/fake". It's like any other neural network - while we can use a CNN for the discriminator, we'll use a simple feedforward network with 3 linear layers.
Each 28x28 image is viewed as a vector of size 784.<jupyter_code>image_size = 784
hidden_size = 256
import torch.nn as nn
D = nn.Sequential(
nn.Linear(image_size, hidden_size),
nn.LeakyReLU(0.2),
nn.Linear(hidden_size, hidden_size),
nn.LeakyReLU(0.2),
nn.Linear(hidden_size, 1),
nn.Sigmoid())<jupyter_output><empty_output><jupyter_text>We are using the Leaky ReLU activation for the discriminator.
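As a quick added illustration (not from the original notebook), Leaky ReLU keeps a small slope for negative inputs instead of zeroing them, which lets gradients keep flowing through the discriminator:<jupyter_code>leaky = nn.LeakyReLU(0.2)
leaky(torch.tensor([-2.0, -0.5, 0.0, 1.0]))  # negative values are multiplied by 0.2 instead of being clamped to 0<jupyter_output><empty_output><jupyter_text>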
We need to move the discriminator model to the device.<jupyter_code>D.to(device)<jupyter_output><empty_output><jupyter_text>## Generator Network
The input to the generator is typically a vector or a matrix which is used as a seed for generating an image. We'll use a feed-forward neural network with 3 layers, and the output will be a vector of size 784, which can be transformed to a 28x28 px image.<jupyter_code>latent_size = 64
G = nn.Sequential(
nn.Linear(latent_size, hidden_size),
nn.ReLU(),
nn.Linear(hidden_size, hidden_size),
nn.ReLU(),
nn.Linear(hidden_size, image_size),
nn.Tanh())<jupyter_output><empty_output><jupyter_text>We are using the **tanh** activation function in the generator.
[Reference](https://stackoverflow.com/questions/41489907/generative-adversarial-networks-tanh)
> "The ReLU activation (Nair & Hinton, 2010) is used in the generator with the exception of the output layer which uses the Tanh function. We observed that using a bounded activation allowed the model to learn more quickly to saturate and cover the color space of the training distribution. Within the discriminator we found the leaky rectified activation (Maas et al., 2013) (Xu et al., 2015) to work well, especially for higher resolution modeling."<jupyter_code>y = G(torch.randn(2, latent_size))
gen_imgs = denorm(y.reshape((-1, 28,28)).detach())
plt.imshow(gen_imgs[0], cmap='gray');
plt.imshow(gen_imgs[1], cmap='gray');
G.to(device);<jupyter_output><empty_output><jupyter_text>## Discriminator Training
Since the discriminator is a binary classification model, we can use the binary cross entropy (**BCE**) loss function to quantify how well it is able to differentiate between real and generated images.<jupyter_code>criterion = nn.BCELoss()
d_optimizer = torch.optim.Adam(D.parameters(), lr=0.0002)<jupyter_output><empty_output><jupyter_text>Define helper functions to reset gradients and to train the discriminator.<jupyter_code>def reset_grad():
d_optimizer.zero_grad()
g_optimizer.zero_grad()
def train_discriminator(images):
# Create the labels which are later used as input for the BCE loss
real_labels = torch.ones(batch_size, 1).to(device)
fake_labels = torch.zeros(batch_size, 1).to(device)
# Loss for real images
outputs = D(images)
d_loss_real = criterion(outputs, real_labels)
real_score = outputs
# Loss for fake images
z = torch.randn(batch_size, latent_size).to(device)
fake_images = G(z)
outputs = D(fake_images)
d_loss_fake = criterion(outputs, fake_labels)
fake_score = outputs
# Combine losses
d_loss = d_loss_real + d_loss_fake
# Reset gradients
reset_grad()
# Compute gradients
d_loss.backward()
# Adjust the parameters using backprop
d_optimizer.step()
return d_loss, real_score, fake_score<jupyter_output><empty_output><jupyter_text>- We expect the discriminator to output 1 if the image was picked from the real MNIST dataset, and 0 if it was generated.
- We first pass a batch of real images, and compute the loss, setting the target labels to 1.
- Then, we generate a batch of fake images using the generator, pass them into the discriminator, and compute the loss, setting the target labels to 0.
- Finally we add the two losses and use the overall loss to perform gradient descent to adjust the weights of the discriminator.
We don't change the weights of the generator model while training the discriminator and vice versa.## Generator Training
In training the generator, we use the discriminator as a part of the loss function:
- We generate a batch of images using the generator, pass the into the discriminator.
- We calculate the loss by setting the target labels to 1 i.e. real. We do this because the generator's objective is to "fool" the discriminator i.e., generator has to generate such images which the discriminator thinks are real.
- We use the loss to perform gradient descent i.e. change the weights of the generator model, and it gets better at generating real-like images.
<jupyter_code>g_optimizer = torch.optim.Adam(G.parameters(), lr=0.0002)
def train_generator():
# Generate fake images and calculate loss
z = torch.randn(batch_size, latent_size).to(device)
fake_images = G(z)
labels = torch.ones(batch_size, 1).to(device)
g_loss = criterion(D(fake_images), labels)
# Backprop and optimize
reset_grad()
g_loss.backward()
g_optimizer.step()
return g_loss, fake_images<jupyter_output><empty_output><jupyter_text>## Training the Model
Create a directory where we can save intermediate outputs from the generator to visually inspect the progress of the model.<jupyter_code>import os
sample_dir = 'samples'
if not os.path.exists(sample_dir):
os.makedirs(sample_dir)<jupyter_output><empty_output><jupyter_text>Save a batch of real images that we can use for visual comparision while looking at the generated images.<jupyter_code>from IPython.display import Image
from torchvision.utils import save_image
# Save some real images
for images, _ in data_loader:
images = images.reshape(images.size(0), 1, 28, 28)
save_image(denorm(images), os.path.join(sample_dir, 'real_images.png'), nrow=10)
break
Image(os.path.join(sample_dir, 'real_images.png'))<jupyter_output><empty_output><jupyter_text>Define a helper function to save a batch of generated images to disk at the end of every epoch. We'll use a fixed set of input vectors to the generator, to see how the individual generated images evolve over time as we train the model.<jupyter_code>sample_vectors = torch.randn(batch_size, latent_size).to(device)
def save_fake_images(index):
fake_images = G(sample_vectors)
fake_images = fake_images.reshape(fake_images.size(0), 1, 28, 28)
fake_fname = 'fake_images-{0:0=4d}.png'.format(index)
print('Saving', fake_fname)
save_image(denorm(fake_images), os.path.join(sample_dir, fake_fname), nrow=10)
# Before training
save_fake_images(0)
Image(os.path.join(sample_dir, 'fake_images-0000.png'))<jupyter_output>Saving fake_images-0000.png
<jupyter_text>We now train the model. In each epoch, we train the discriminator first, and then the generator.<jupyter_code>%%time
num_epochs = 300
total_step = len(data_loader)
d_losses, g_losses, real_scores, fake_scores = [], [], [], []
for epoch in range(num_epochs):
for i, (images, _) in enumerate(data_loader):
# Load a batch & transform to vectors
images = images.reshape(batch_size, -1).to(device)
# Train the discriminator and generator
d_loss, real_score, fake_score = train_discriminator(images)
g_loss, fake_images = train_generator()
# Inspect the losses
if (i+1) % 200 == 0:
d_losses.append(d_loss.item())
g_losses.append(g_loss.item())
real_scores.append(real_score.mean().item())
fake_scores.append(fake_score.mean().item())
print('Epoch [{}/{}], Step [{}/{}], d_loss: {:.4f}, g_loss: {:.4f}, D(x): {:.2f}, D(G(z)): {:.2f}'
.format(epoch, num_epochs, i+1, total_step, d_loss.item(), g_loss.item(),
real_score.mean().item(), fake_score.mean().item()))
# Sample and save images
save_fake_images(epoch+1)
# Save the model checkpoints
torch.save(G.state_dict(), 'G.ckpt')
torch.save(D.state_dict(), 'D.ckpt')
Image('./samples/fake_images-0010.png')
Image('./samples/fake_images-0050.png')
Image('./samples/fake_images-0100.png')
Image('./samples/fake_images-0200.png')
Image('./samples/fake_images-0300.png')<jupyter_output><empty_output><jupyter_text>We can visualize the training process by combining the sample images generated after each epoch into a video using OpenCV.<jupyter_code>import cv2
import os
from IPython.display import FileLink
vid_fname = 'gans_training.avi'
files = [os.path.join(sample_dir, f) for f in os.listdir(sample_dir) if 'fake_images' in f]
files.sort()
out = cv2.VideoWriter(vid_fname,cv2.VideoWriter_fourcc(*'MP4V'), 8, (302,302))
[out.write(cv2.imread(fname)) for fname in files]
out.release()
FileLink('gans_training.avi')<jupyter_output><empty_output><jupyter_text>We can also visualize how the loss changes over time. Visualizing losses is quite useful for debugging the training process. For GANs, we expect the generator's loss to reduce over time, without the discriminator's loss getting too high.<jupyter_code>plt.plot(d_losses, '-')
plt.plot(g_losses, '-')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend(['Discriminator', 'Generator'])
plt.title('Losses');
plt.plot(real_scores, '-')
plt.plot(fake_scores, '-')
plt.xlabel('epoch')
plt.ylabel('score')
plt.legend(['Real Score', 'Fake score'])
plt.title('Scores');<jupyter_output><empty_output>
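<jupyter_text>As an added sketch (not part of the original notebook), the saved checkpoint can be reloaded later to sample brand-new digits without retraining, assuming the generator `G` is built with the same sizes as above:<jupyter_code># load the trained weights back into the existing generator and sample a fresh batch
G.load_state_dict(torch.load('G.ckpt'))
z = torch.randn(16, latent_size).to(device)
sampled = denorm(G(z).reshape(-1, 28, 28).detach().cpu())
plt.imshow(sampled[0], cmap='gray');<jupyter_output><empty_output>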
|
no_license
|
/GAN/GAN_MNIST.ipynb
|
Jayanth2209/PyTorch_Learning
| 17 |
<jupyter_start><jupyter_text># my1stNN.ipynb (or MNIST digits classification with TensorFlow)### This task will be submitted for peer review, so make sure it contains all the necessary outputs!<jupyter_code>import numpy as np
from sklearn.metrics import accuracy_score
from matplotlib import pyplot as plt
%matplotlib inline
import tensorflow as tf
print("We're using TF", tf.__version__)
import sys
sys.path.append("..")
import grading
import matplotlib_utils
from importlib import reload
reload(matplotlib_utils)
import keras_utils
from keras_utils import reset_tf_session<jupyter_output><empty_output><jupyter_text># Look at the data
In this task we have 50000 28x28 images of digits from 0 to 9.
We will train a classifier on this data.<jupyter_code>import preprocessed_mnist
X_train, y_train, X_val, y_val, X_test, y_test = preprocessed_mnist.load_dataset()
# X contains grayscale pixel values divided by 255
print("X_train [shape %s] sample patch:\n" % (str(X_train.shape)), X_train[1, 15:20, 5:10])
print("A closeup of a sample patch:")
plt.imshow(X_train[1, 15:20, 5:10], cmap="Greys")
plt.show()
print("And the whole sample:")
plt.imshow(X_train[1], cmap="Greys")
plt.show()
print("y_train [shape %s] 10 samples:\n" % (str(y_train.shape)), y_train[:10])<jupyter_output><empty_output><jupyter_text># Linear model
Your task is to train a linear classifier $\vec{x} \rightarrow y$ with SGD using TensorFlow.
You will need to calculate a logit (a linear transformation) $z_k$ for each class:
$$z_k = \vec{x} \cdot \vec{w_k} + b_k \quad k = 0..9$$
And transform logits $z_k$ to valid probabilities $p_k$ with softmax:
$$p_k = \frac{e^{z_k}}{\sum_{i=0}^{9}{e^{z_i}}} \quad k = 0..9$$
We will use a cross-entropy loss to train our multi-class classifier:
$$\text{cross-entropy}(y, p) = -\sum_{k=0}^{9}{\log(p_k)[y = k]}$$
where
$$
[x]=\begin{cases}
1, \quad \text{if $x$ is true} \\
0, \quad \text{otherwise}
\end{cases}
$$
Cross-entropy minimization pushes $p_k$ close to 1 when $y = k$, which is what we want.
Here's the plan:
* Flatten the images (28x28 -> 784) with `X_train.reshape((X_train.shape[0], -1))` to simplify our linear model implementation
* Use a matrix placeholder for flattened `X_train`
* Convert `y_train` to one-hot encoded vectors that are needed for cross-entropy
* Use a shared variable `W` for all weights (a column $\vec{w_k}$ per class) and `b` for all biases.
* Aim for ~0.93 validation accuracy<jupyter_code>X_train_flat = X_train.reshape((X_train.shape[0], -1))
print(X_train_flat.shape)
X_val_flat = X_val.reshape((X_val.shape[0], -1))
print(X_val_flat.shape)
import keras # we use keras only for keras.utils.to_categorical
y_train_oh = keras.utils.to_categorical(y_train, 10)
y_val_oh = keras.utils.to_categorical(y_val, 10)
print(y_train_oh.shape)
print(y_train_oh[:3], y_train[:3])
# run this again if you remake your graph
s = reset_tf_session()
# Model parameters: W and b
W = ### YOUR CODE HERE ### tf.get_variable(...) with shape[0] = 784
b = ### YOUR CODE HERE ### tf.get_variable(...)
# Placeholders for the input data
input_X = ### YOUR CODE HERE ### tf.placeholder(...) for flat X with shape[0] = None for any batch size
input_y = ### YOUR CODE HERE ### tf.placeholder(...) for one-hot encoded true labels
# Compute predictions
logits = ### YOUR CODE HERE ### logits for input_X, resulting shape should be [input_X.shape[0], 10]
probas = ### YOUR CODE HERE ### apply tf.nn.softmax to logits
classes = ### YOUR CODE HERE ### apply tf.argmax to find a class index with highest probability
# Loss should be a scalar number: average loss over all the objects with tf.reduce_mean().
# Use tf.nn.softmax_cross_entropy_with_logits on top of one-hot encoded input_y and logits.
# It is identical to calculating cross-entropy on top of probas, but is more numerically friendly (read the docs).
loss = ### YOUR CODE HERE ### cross-entropy loss
# Use a default tf.train.AdamOptimizer to get an SGD step
step = ### YOUR CODE HERE ### optimizer step that minimizes the loss
s.run(tf.global_variables_initializer())
BATCH_SIZE = 512
EPOCHS = 40
# for logging the progress right here in Jupyter (for those who don't have TensorBoard)
simpleTrainingCurves = matplotlib_utils.SimpleTrainingCurves("cross-entropy", "accuracy")
for epoch in range(EPOCHS): # we finish an epoch when we've looked at all training samples
batch_losses = []
for batch_start in range(0, X_train_flat.shape[0], BATCH_SIZE): # data is already shuffled
_, batch_loss = s.run([step, loss], {input_X: X_train_flat[batch_start:batch_start+BATCH_SIZE],
input_y: y_train_oh[batch_start:batch_start+BATCH_SIZE]})
# collect batch losses, this is almost free as we need a forward pass for backprop anyway
batch_losses.append(batch_loss)
train_loss = np.mean(batch_losses)
val_loss = s.run(loss, {input_X: X_val_flat, input_y: y_val_oh}) # this part is usually small
train_accuracy = accuracy_score(y_train, s.run(classes, {input_X: X_train_flat})) # this is slow and usually skipped
valid_accuracy = accuracy_score(y_val, s.run(classes, {input_X: X_val_flat}))
simpleTrainingCurves.add(train_loss, val_loss, train_accuracy, valid_accuracy)<jupyter_output><empty_output>
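<jupyter_text>One possible completion of the placeholders above (a hedged sketch of a reasonable answer to the exercise, not the only correct one) could look like this:<jupyter_code># one possible way to fill in the blanks above (TF 1.x style)
W = tf.get_variable("W", shape=(784, 10), dtype=tf.float32)   # weights, one column per class
b = tf.get_variable("b", shape=(10,), dtype=tf.float32)       # biases
input_X = tf.placeholder(tf.float32, shape=(None, 784))       # flattened images
input_y = tf.placeholder(tf.float32, shape=(None, 10))        # one-hot labels
logits = tf.matmul(input_X, W) + b
probas = tf.nn.softmax(logits)
classes = tf.argmax(probas, axis=1)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=input_y, logits=logits))
step = tf.train.AdamOptimizer().minimize(loss)<jupyter_output><empty_output>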
|
permissive
|
/advance ml courera assignments/digits_classification.ipynb
|
rahul263-stack/PROJECT-Dump
| 3 |
<jupyter_start><jupyter_text># Exercise 1### Step 1. Go to https://www.kaggle.com/openfoodfacts/world-food-facts/data### Step 2. Download the dataset to your computer and unzip it.### Step 3. Use the tsv file and assign it to a dataframe called food<jupyter_code>import pandas as pd
PATH_TO_FILE = "/Users/dag/Downloads/en.openfoodfacts.org.products.tsv"
food = pd.read_csv(PATH_TO_FILE, sep = "\t")<jupyter_output>/Users/dag/anaconda3/lib/python3.7/site-packages/IPython/core/interactiveshell.py:3049: DtypeWarning: Columns (0,3,5,19,20,24,25,26,27,28,36,37,38,39,48) have mixed types. Specify dtype option on import or set low_memory=False.
interactivity=interactivity, compiler=compiler, result=result)
<jupyter_text>### Step 4. See the first 5 entries<jupyter_code>food.head(5) # defaults to 5!<jupyter_output><empty_output><jupyter_text>### Step 5. What is the number of observations in the dataset?<jupyter_code>food.shape[0]<jupyter_output><empty_output><jupyter_text>### Step 6. What is the number of columns in the dataset?<jupyter_code>food.shape[1]<jupyter_output><empty_output><jupyter_text>### Step 7. Print the name of all the columns.<jupyter_code>[col for col in food.columns]<jupyter_output><empty_output><jupyter_text>### Step 8. What is the name of 105th column?<jupyter_code>food.columns[105] # remember 0 indexing!<jupyter_output><empty_output><jupyter_text>### Step 9. What is the type of the observations of the 105th column?<jupyter_code>food[food.columns[105]].dtype<jupyter_output><empty_output><jupyter_text>### Step 10. How is the dataset indexed?<jupyter_code>food.index<jupyter_output><empty_output><jupyter_text>### Step 11. What is the product name of the 19th observation?<jupyter_code>food["product_name"].iloc[19] # remember 0 indexing!
# Solution proposes:
print(food.values[18][7])
# seems to cause performance issues b/c python first returns all values
# and only then subsets them<jupyter_output>Lotus Organic Brown Jasmine Rice
|
permissive
|
/01_Getting_&_Knowing_Your_Data/World Food Facts/Exercises.ipynb
|
dagtann/pandas_exercises
| 9 |
<jupyter_start><jupyter_text># Logistic Regression<jupyter_code>import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
np.random.seed(10)<jupyter_output><empty_output><jupyter_text>In a binary classification task the target variable can take two values, usually labeled `0` and `1` or `-1` and `1`. The class labeled `0` (or `-1`) is called the **negative** class, while the class labeled `1` is called the **positive** class. **Logistic regression** is one of the basic binary classification models. It models the target through a linear combination of the input variables $X_1$, $X_2$, ..., $X_m$. The target function of logistic regression is given by $$f_\beta(X)= \frac{1}{1+e^{-(\beta_0 + \beta_1X_1 + \ldots + \beta_mX_m)}} = \frac{1}{1+e^{-\beta^T X}} = \sigma(\beta^TX)$$ where $\sigma(x)$ is the sigmoid function defined by $\sigma(x) = \frac{1}{1+e^{-x}}$. The value of the target function can be interpreted as the probability of belonging to a class: if it is greater than 0.5 the classifier predicts the positive class, otherwise it predicts the negative class. ## The Sigmoid Function<jupyter_code># an arbitrary equidistant grid of 100 points
N = 100
x = np.linspace(-10, 10, N)
y = 1/(1 + np.exp(-x))
plt.plot(x, y, label = 'sigmoid function')
plt.plot(x, [0.5]*N, color = 'red')
plt.plot(x[70], y[70], color='blue', marker='o')
plt.legend(loc = 'best')
plt.show()<jupyter_output><empty_output><jupyter_text>If the value of the sigmoid function at a point `x` is greater than `0.5`, the point is assigned the label `1`; otherwise it is assigned the label `0`.## Cross-EntropyThe parameters of a logistic regression model are determined by minimizing a loss function. For logistic regression this is the `binary cross-entropy`, given by the expression $$-\sum_{i=1}^{N} (y_i \log f_\beta(x_i) + (1-y_i)\log(1-f_\beta(x_i)))$$ in which $(x_i, y_i)$ are the instances of the training set and $N$ is the total number of instances in that set.<jupyter_code># an arbitrary equidistant grid of 50 points in the interval [0, 1]
N = 50
y_i = np.linspace(0, 1, N, endpoint=True)
# values of -log(y)
loss_part_positive = -np.log(y_i)
# values of -log(1-y)
loss_part_negative = -np.log(1-y_i)
plt.figure(figsize = (10, 5))
plt.subplot(1, 2, 1)
plt.title('Loss contribution of positive instances')
plt.plot(y_i, loss_part_positive, color ='b')
plt.subplot(1, 2, 2)
plt.title('Loss contribution of negative instances')
plt.plot(y_i, loss_part_negative, color ='r')
plt.show()<jupyter_output><empty_output><jupyter_text>If the expected classifier output for an instance $(x_i, y_i)$ is $1$, then the more the prediction deviates from that expected value, the larger its contribution to the loss (left plot). If the expected output is $0$, the more we deviate from that value, the larger the contribution to the loss (right plot). # ExampleIn the example that follows, a logistic regression model is used to classify tumors as malignant or benign. We use the Wisconsin breast cancer dataset, described in more detail on the official [UCI](https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Diagnostic)) site.<jupyter_code>from sklearn import linear_model
from sklearn import model_selection
from sklearn import metrics
from sklearn import preprocessing
from sklearn import datasets
data = datasets.load_breast_cancer()<jupyter_output><empty_output><jupyter_text>### Data Analysis<jupyter_code># print(data.DESCR)<jupyter_output><empty_output><jupyter_text>Each instance of the dataset is described by 30 different attributes.<jupyter_code>data.feature_names
X = pd.DataFrame(data.data, columns = data.feature_names)
X.shape
# X.info()<jupyter_output><empty_output><jupyter_text>The target variable takes the values `0` and `1`, corresponding to malignant and benign tumors respectively.<jupyter_code>data.target_names
y = data.target
print('Number of benign instances: ')
np.sum(y==1)
print('Number of malignant instances: ')
np.sum(y==0)<jupyter_output>Number of malignant instances: 
<jupyter_text>### Splitting the Data into Training and Test SetsWhen splitting the dataset into training and test sets, we will take care of `stratification` (the stratify parameter). Stratification is a way of splitting the data that preserves the class distribution. <jupyter_code>X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, test_size = 0.33, random_state = 7, stratify = y)
X_test.shape<jupyter_output><empty_output><jupyter_text>Checking the stratification:<jupyter_code>m_train = np.sum(y_train == 0)
b_train = np.sum(y_train == 1)
print('Benign: ', b_train, 'Malignant: ', m_train)
m_test = np.sum(y_test == 0)
b_test = np.sum(y_test == 1)
print('Benign: ', b_test, 'Malignant: ', m_test)
plt.title("Visual check of the stratification")
plt.xticks([0,1])
plt.xlabel(' 0 - malignant tumor, 1 - benign tumor')
plt.hist([y_train, y_test], color=['orange', 'cadetblue'], label=['training set', 'test set'])
plt.legend(loc='best')
plt.show()<jupyter_output><empty_output><jupyter_text>### Data Standardization<jupyter_code>scaler = preprocessing.StandardScaler()
scaler.fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)<jupyter_output><empty_output><jupyter_text>### Training the Model<jupyter_code>model = linear_model.LogisticRegression(solver='lbfgs')
model.fit(X_train, y_train)<jupyter_output><empty_output><jupyter_text>The resulting parameters can be read from the `intercept_` and `coef_` attributes.<jupyter_code>model.intercept_
model.coef_<jupyter_output><empty_output><jupyter_text>Interpreting the coefficient values is important for understanding the model itself and the influence of each attribute.<jupyter_code>N = len(data.feature_names)
values = model.coef_[0]
plt.figure(figsize = (10, 5))
plt.bar(np.arange(0, N),values)
plt.xticks(np.arange(0, N), data.feature_names, rotation='vertical')
plt.show()<jupyter_output><empty_output><jupyter_text>For example, this view shows that attributes such as `radius error`, `worst texture` and `worst area` are important for predicting the negative class, i.e. malignant tumors, while the `compactness error` attribute matters for predicting the positive class, i.e. benign tumors. ### Evaluation<jupyter_code>y_test_predicted = model.predict(X_test)<jupyter_output><empty_output><jupyter_text>All the measures of interest are available in the `metrics` library.<jupyter_code>metrics.accuracy_score(y_test, y_test_predicted)
metrics.precision_score(y_test, y_test_predicted)
metrics.recall_score(y_test, y_test_predicted)
metrics.f1_score(y_test, y_test_predicted)
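# (added illustration, not part of the original notebook) the macro and weighted averages
# discussed further below can also be computed directly
metrics.f1_score(y_test, y_test_predicted, average='macro')
metrics.f1_score(y_test, y_test_predicted, average='weighted')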
y_train_predicted = model.predict(X_train)
train_score = metrics.accuracy_score(y_train, y_train_predicted)
test_score = metrics.accuracy_score(y_test, y_test_predicted)
print("Tacnost na skupu za treniranje: {train}\nTacnost na skupu za testiranje: {test}".format(train=train_score, test=test_score))<jupyter_output>Tacnost na skupu za treniranje: 0.989501312335958
Tacnost na skupu za testiranje: 0.9787234042553191
<jupyter_text>A per-class summary evaluation can be read from the classification report available in the `metrics` library.<jupyter_code>print(metrics.classification_report(y_test, y_test_predicted))<jupyter_output>              precision    recall  f1-score   support
0 0.96 0.99 0.97 70
1 0.99 0.97 0.98 118
accuracy 0.98 188
macro avg 0.97 0.98 0.98 188
weighted avg 0.98 0.98 0.98 188
<jupyter_text>The `macro average` (labeled `macro avg` in the classification report) is computed as the arithmetic mean of the per-class values. For example, the macro average of the F1-score is $\frac{0.97+0.98}{2} = 0.975 $. The `weighted average` (labeled `weighted avg`) also takes into account the number of instances that support each class. For example, the weighted average precision is $\frac{70}{188} \cdot 0.96 + \frac{118}{188} \cdot 0.99 = 0.978829$. The confusion matrix can be obtained by calling the `confusion_matrix` method of the `metrics` library.<jupyter_code>metrics.confusion_matrix(y_test, y_test_predicted)<jupyter_output><empty_output><jupyter_text>The layout of this matrix is as follows:
position (0, 0) holds the TN value, position (1, 0) the FN value, position (1, 1) the TP value, and finally position (0, 1) the FP value. For a more detailed analysis of the errors it is often necessary to inspect the instances labeled as false positives (FP) or false negatives (FN). <jupyter_code># Extracting the false positive (FP) instances
# False positive: y_true = 0, y_predicted = 1
FP_mask = (y_test == 0) & (y_test_predicted == 1)
np.where(FP_mask == True)
X_test[38]
# Extracting the false negative (FN) instances
# False negative: y_true =1, y_predicted = 0
FN_maska = (y_test == 1) & (y_test_predicted == 0)
# number of false negative instances
np.sum(FN_maska)
# indices of the false negative instances
ind = np.where(FN_maska == True)
ind
# the false negative instances
X_test[ind]<jupyter_output><empty_output><jupyter_text>### Analyzing the Classifier's ConfidenceUsing the model's `predict_proba` method, the library returns, for each individual instance, the estimated probabilities of belonging to the negative and positive class. For example, for the first 10 instances the classifier gives the following result: <jupyter_code>y_test_predicted[0:10]<jupyter_output><empty_output><jupyter_text>The probabilities of belonging to the negative and positive class for these 10 instances can be obtained with the following code:<jupyter_code>y_probabilities_predicted = model.predict_proba(X_test)
y_probabilities_predicted[0:10]<jupyter_output><empty_output><jupyter_text>The classifier can be very confident about some predictions, while for others this need not be the case. By setting an appropriate threshold, the classifier's strictness can be adjusted to the problem being solved.<jupyter_code>model_confidence = []
for p in y_probabilities_predicted:
model_confidence.append(np.abs(p[0]-p[1]))
model_confidence = np.array(model_confidence)
# instances ordered by how confident the classifier is in its decision
# the indices of instances the classifier is least sure about come first
model_confidence.argsort()
model_confidence.sort()
# sorted array of probability differences
model_confidence<jupyter_output><empty_output><jupyter_text>Similar functionality is also available through the model's `predict_log_proba` method. The only difference is that the membership probabilities are on a logarithmic scale. <jupyter_code>y_log_probabilities_predicted = model.predict_log_proba(X_test)
y_log_probabilities_predicted[0:10]<jupyter_output><empty_output><jupyter_text>### Saving the Model for Later UseThe `pickle` package and its `dump` method can be used to serialize and save the model.<jupyter_code>import pickle
with open('models/lr_classifier.model.pickle', 'wb') as model_file:
pickle.dump(model, model_file)
with open('models/lr_classifier.scaler.pickle', 'wb') as scaler_file:
pickle.dump(scaler, scaler_file)<jupyter_output><empty_output><jupyter_text>Before future use, the model and the scaler can be deserialized using the `load` method of the `pickle` package.<jupyter_code>with open('models/lr_classifier.model.pickle', 'rb') as model_file:
model_revived = pickle.load(model_file)
with open('models/lr_classifier.scaler.pickle', 'rb') as scaler_file:
scaler_revived = pickle.load(scaler_file)<jupyter_output><empty_output><jupyter_text>The steps for classifying a new instance `x_new` are: <jupyter_code>x_new = np.random.rand(N).reshape(1, -1)
x_new_transformed = scaler_revived.transform(x_new)
y_new = model_revived.predict(x_new_transformed)
y_new<jupyter_output><empty_output>
|
no_license
|
/nedelja4/04-Logistička regresija.ipynb
|
anjavelickovic/materijali-sa-vezbi-2021
| 25 |
<jupyter_start><jupyter_text># Rigid/Affine + LDDMM Registration<jupyter_code>import sys
sys.path.insert(0,'../') # add code directory to path
# import lddmm
import torch_lddmm
# import numpy
import numpy as np
# import nibabel for i/o
import nibabel as nib
# import matplotlib for display
import matplotlib.pyplot as plt<jupyter_output><empty_output><jupyter_text>## Load images as numpy arrays - Human Brain MRI<jupyter_code># set image file names
template_file_name = '../notebook/human_brain_affine.img'
target_file_name = '../notebook/Adt27-55_03_Adt27-55_03_MNI.img'
# load images
template_image_struct = nib.load(template_file_name)
target_image_struct = nib.load(target_file_name)
# set image spacing from template image, assume both images are the same spacing
dx = template_image_struct.header['pixdim'][1:4]
# get images as 3D numpy arrays
template_image = np.squeeze(template_image_struct.get_data()).astype(np.float32)
target_image = np.squeeze(target_image_struct.get_data()).astype(np.float32)
# draw a slice of each brain
plt.rcParams["figure.figsize"]=20,20
plt.figure()
plt.subplot(1,4,1)
plt.imshow(template_image[:,:,100])
plt.title('Template Image')
plt.subplot(1,4,2)
plt.imshow(target_image[:,:,100])
plt.title('Target Image')
plt.subplot(1,4,3)
plt.imshow(template_image[105,:,:])
plt.title('Template Image')
plt.subplot(1,4,4)
plt.imshow(target_image[105,:,:])
plt.title('Target Image')
plt.show()<jupyter_output><empty_output><jupyter_text>## Initialize with rigid-only alignment<jupyter_code># create torch_lddmm object
# here, we manually set:
# do_affine = 1 (this indicates rigid, 2 indicates affine)
# do_lddmm = 0
lddmm = torch_lddmm.LDDMM(template=template_image,target=target_image,outdir='../notebook/',do_affine=2,do_lddmm=0,niter=750,epsilonL=2e-9,epsilonT=2e-10,sigma=20.0,optimizer='gdr',dx=dx)
# run computation
lddmm.run()<jupyter_output>WARNING: nt set to 1 because settings indicate affine registration only.
iter: 0, E = 30311928.0000, ER = 0.0000, EM = 30311928.0000, epd = 0.005000.
iter: 1, E= 29757636.000, ER= 0.000, EM= 29757636.000, epd= 0.005, time= 0.11s.
iter: 2, E= 29629528.000, ER= 0.000, EM= 29629528.000, epd= 0.005, time= 0.09s.
iter: 3, E= 29529578.000, ER= 0.000, EM= 29529578.000, epd= 0.005, time= 0.09s.
iter: 4, E= 29432860.000, ER= 0.000, EM= 29432860.000, epd= 0.005, time= 0.09s.
iter: 5, E= 29322020.000, ER= 0.000, EM= 29322020.000, epd= 0.005, time= 0.09s.
iter: 6, E= 29196944.000, ER= 0.000, EM= 29196944.000, epd= 0.005, time= 0.09s.
iter: 7, E= 29069780.000, ER= 0.000, EM= 29069780.000, epd= 0.005, time= 0.09s.
iter: 8, E= 28940280.000, ER= 0.000, EM= 28940280.000, epd= 0.005, time= 0.09s.
iter: 9, E= 28808744.000, ER= 0.000, EM= 28808744.000, epd= 0.005, time= 0.09s.
iter: 10, E= 28677908.000, ER= 0.000, EM= 28677908.000, epd= 0.005, time= 0.09s.
iter: 11, E= 28550038.000, ER= 0.000, EM= 2855003[...]<jupyter_text>## display result of rigid-only alignment<jupyter_code># display the deformed template according to the current transform
(deformed_template,_,_,_) = lddmm.applyThisTransform(template_image)
deformed_template = deformed_template[-1].cpu().numpy()
plt.figure()
plt.subplot(1,4,1)
plt.imshow(deformed_template[:,:,100])
plt.title('Deformed Template')
plt.subplot(1,4,2)
plt.imshow(target_image[:,:,100])
plt.title('Target Image')
plt.subplot(1,4,3)
plt.imshow(deformed_template[105,:,:])
plt.title('Deformed Template')
plt.subplot(1,4,4)
plt.imshow(target_image[105,:,:])
plt.title('Target Image')
plt.show()<jupyter_output><empty_output><jupyter_text>## continue optimization from current state with affine-only alignment<jupyter_code># affine-only alignment
lddmm.setParams('do_affine',1)
lddmm.setParams('niter',200)
lddmm.setParams('epsilonL',lddmm.GDBetaAffineR*lddmm.params['epsilonL']) # reduce step size, here we set it to the current size
lddmm.setParams('epsilonT',lddmm.GDBetaAffineT*lddmm.params['epsilonT']) # reduce step size, here we set it to the current size
lddmm.run()<jupyter_output>Parameter 'do_affine' changed to '1'.
Parameter 'niter' changed to '200'.
Parameter 'epsilonL' changed to '1.877496067529548e-13'.
Parameter 'epsilonT' changed to '1.8774960675295478e-14'.
iter: 0, E = 12722790.0000, ER = 0.0000, EM = 12722790.0000, epd = 0.005000.
iter: 1, E= 12722563.000, ER= 0.000, EM= 12722563.000, epd= 0.005, time= 0.09s.
iter: 2, E= 12722346.000, ER= 0.000, EM= 12722346.000, epd= 0.005, time= 0.09s.
iter: 3, E= 12722121.000, ER= 0.000, EM= 12722121.000, epd= 0.005, time= 0.09s.
iter: 4, E= 12721901.000, ER= 0.000, EM= 12721901.000, epd= 0.005, time= 0.09s.
iter: 5, E= 12721678.000, ER= 0.000, EM= 12721678.000, epd= 0.005, time= 0.09s.
iter: 6, E= 12721454.000, ER= 0.000, EM= 12721454.000, epd= 0.005, time= 0.09s.
iter: 7, E= 12721232.000, ER= 0.000, EM= 12721232.000, epd= 0.005, time= 0.09s.
iter: 8, E= 12721013.000, ER= 0.000, EM= 12721013.000, epd= 0.005, time= 0.09s.
iter: 9, E= 12720785.000, ER= 0.000, EM= 12720785.000, epd= 0.005, time= 0.09s.
iter: 10, E= 1[...]<jupyter_text>## display result of rigid-only + affine-only alignment<jupyter_code># display the deformed template according to the current transform
(deformed_template,_,_,_) = lddmm.applyThisTransform(template_image)
deformed_template = deformed_template[-1].cpu().numpy()
plt.figure()
plt.subplot(1,4,1)
plt.imshow(deformed_template[:,:,100])
plt.title('Deformed Template')
plt.subplot(1,4,2)
plt.imshow(target_image[:,:,100])
plt.title('Target Image')
plt.subplot(1,4,3)
plt.imshow(deformed_template[105,:,:])
plt.title('Deformed Template')
plt.subplot(1,4,4)
plt.imshow(target_image[105,:,:])
plt.title('Target Image')
plt.show()<jupyter_output><empty_output><jupyter_text>## continue optimization with tandem lddmm and affine registration<jupyter_code>lddmm.setParams('do_lddmm',1) # do_affine is still 1, so we will optimize both affine and diffeo simultaneously
lddmm.setParams('niter',200)
lddmm.setParams('a',7)
lddmm.setParams('epsilon',4e0)
lddmm.setParams('sigmaR',40.0)
lddmm.setParams('epsilonL',lddmm.GDBetaAffineR*lddmm.params['epsilonL']) # reduce step size, here we set it to the current size
lddmm.setParams('epsilonT',lddmm.GDBetaAffineT*lddmm.params['epsilonT']) # reduce step size, here we set it to the current size
lddmm.run()
# display the deformed template according to the current transform
(deformed_template,_,_,_) = lddmm.applyThisTransform(template_image)
deformed_template = deformed_template[-1].cpu().numpy()
plt.figure()
plt.subplot(1,4,1)
plt.imshow(deformed_template[:,:,100])
plt.title('Deformed Template')
plt.subplot(1,4,2)
plt.imshow(target_image[:,:,100])
plt.title('Target Image')
plt.subplot(1,4,3)
plt.imshow(deformed_template[105,:,:])
plt.title('Deformed Template')
plt.subplot(1,4,4)
plt.imshow(target_image[105,:,:])
plt.title('Target Image')
plt.show()<jupyter_output><empty_output><jupyter_text>## finally, do LDDMM only<jupyter_code>lddmm.setParams('do_affine',0) # do_lddmm is still 1
lddmm.setParams('niter',150)
lddmm.setParams('a',7)
lddmm.setParams('epsilon',float(lddmm.params['epsilon']*lddmm.GDBeta.cpu().numpy()))
lddmm.run()
# display the deformed template according to the current transform
(deformed_template,_,_,_) = lddmm.applyThisTransform(template_image)
deformed_template = deformed_template[-1].cpu().numpy()
plt.figure()
plt.subplot(1,4,1)
plt.imshow(deformed_template[:,:,100])
plt.title('Deformed Template')
plt.subplot(1,4,2)
plt.imshow(target_image[:,:,100])
plt.title('Target Image')
plt.subplot(1,4,3)
plt.imshow(deformed_template[105,:,:])
plt.title('Deformed Template')
plt.subplot(1,4,4)
plt.imshow(target_image[105,:,:])
plt.title('Target Image')
plt.show()
# display intensity difference before and after mapping
diffimg_before = np.abs(template_image-target_image)
diffimg_after = np.abs(deformed_template-target_image)
plt.figure()
plt.subplot(1,4,1)
plt.imshow(diffimg_before[:,:,100],vmin=0,vmax=255)
plt.title('Error Before Registration')
plt.subplot(1,4,2)
plt.imshow(diffimg_after[:,:,100],vmin=0,vmax=255)
plt.title('Error After Registration')
plt.subplot(1,4,3)
plt.imshow(diffimg_before[105,:,:],vmin=0,vmax=255)
plt.title('Error Before Registration')
plt.subplot(1,4,4)
plt.imshow(diffimg_after[105,:,:],vmin=0,vmax=255)
plt.title('Error After Registration')
plt.show()
# output transforms
(vt0,vt1,vt2,A) = lddmm.outputTransforms() # output LDDMM and linear transforms
(phi0,phi1,phi2) = lddmm.computeThisDisplacement() # output resultant displacement field
deformed_template = lddmm.outputDeformedTemplate() # output deformed template as numpy array
# clear memory (the LDDMM object still exists and consumes some GPU memory but transforms are deleted)
lddmm.delete()<jupyter_output><empty_output>
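<jupyter_text>As an added, hedged sketch (not part of the original example), the deformed template returned by `outputDeformedTemplate()` is a NumPy array and can be written back to disk with nibabel for inspection in other tools:<jupyter_code>import nibabel as nib
import numpy as np
# save the deformed template as a NIfTI volume with an identity affine (adjust the affine/spacing as needed)
nib.save(nib.Nifti1Image(deformed_template, np.eye(4)), 'deformed_template.nii.gz')<jupyter_output><empty_output>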
|
no_license
|
/examples/.ipynb_checkpoints/6_Affine_plus_LDDMM_Registration-checkpoint.ipynb
|
brianlee324/torch-lddmm
| 8 |
<jupyter_start><jupyter_text># Introduction to Pandas : Part 1
-------
This tutorial is heavily based on [Pandas in 10 min](https://pandas.pydata.org/pandas-docs/stable/10min.html). The original material was modified by adding TnSeq data as examples.
## Get datasets to play with<jupyter_code>%%bash
wget https://nekrut.github.io/BMMB554/tnseq_untreated.txt.gz
wget https://nekrut.github.io/BMMB554/ta_gc.txt
!ls -lh
!gunzip -c tnseq_untreated.txt.gz | tail -n 100
data_file = 'tnseq_untreated.txt.gz'<jupyter_output><empty_output><jupyter_text>The first dataset lists coordinates of `TA` sites and counts of reads for TnSeq constructs 'blunt', 'cap', 'dual', 'erm', 'pen', and 'tuf':<jupyter_code>!gunzip -c {data_file} | head -n 100
# Just two choices for beginning of of gene field
!gunzip -c {data_file} | cut -f 8 | cut -f 1 -d '=' | sort | uniq -c
# Process tnseq_untreated.txt.gz to correctly parse gene names
import os
f = open('data.txt','w')
with os.popen('gunzip -c {} '.format(data_file)) as stream:
for line in stream:
if line.split( '\t' )[7].startswith( '.' ):
f.write( '{}\t{}\n'.format( '\t'.join( line.split( '\t' )[:7] ) , 'intergenic' ) )
elif line.split( '\t' )[7].startswith( 'ID' ):
f.write( '{}\t{}\n'.format( '\t'.join( line.split( '\t' )[:7] ) , line.split( '\t' )[7].split(';')[0][3:] ) )
f.close()
!wc -l data.txt
!head -n 100 data.txt
!gunzip -c {data_file} | wc -l
import pandas as pd
tnseq = pd.read_table('data.txt', header=None, names=['pos','blunt','cap','dual','erm','pen','tuf','gene'])
tnseq.head()
# Let's create a small subset of this dataset
df = tnseq[tnseq['blunt'] > 200]
df.head()
df.index
df.describe()
df = df.sort_values(by=['pos'])
df = df.set_index('pos')
df.head()<jupyter_output><empty_output><jupyter_text>## Selection
-------
**Note**: While standard Python / Numpy expressions for selecting and setting are intuitive and come in handy for interactive work, for production code, we recommend the optimized pandas data access methods, `.at`, `.iat`, `.loc` and `.iloc`.
See the indexing documentation:
- [Indexing and Selecting Data](https://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing)
- [MultiIndex / Advanced Indexing](https://pandas.pydata.org/pandas-docs/stable/advanced.html#advanced)### GettingSelecting via `[]`, which slices the rows.<jupyter_code>df[0:3]<jupyter_output><empty_output><jupyter_text>Selecting a single column, a `series` can be done in two ways:<jupyter_code>df.gene.head()<jupyter_output><empty_output><jupyter_text>or<jupyter_code>df['gene'].head()<jupyter_output><empty_output><jupyter_text>### Selection by label
See more in [Selection by Label](https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#indexing-label).
For getting a cross section using a label:<jupyter_code>df.loc[2404930]
df.loc[2404930,['erm','pen']]
df.loc[2404930:2404937,['erm','pen']]<jupyter_output><empty_output><jupyter_text>### Selection by position
See more in [Selection by Position](https://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-integer)
Select via the position of the passed integers:<jupyter_code>df.iloc[3]<jupyter_output><empty_output><jupyter_text>By integer slices, acting similar to numpy/python:<jupyter_code>df.iloc[3:5,0:2]<jupyter_output><empty_output><jupyter_text>By lists of integer position locations, similar to the numpy/python style:<jupyter_code>df.iloc[[1,2,4],[0,2]]<jupyter_output><empty_output><jupyter_text>For slicing rows explicitly:<jupyter_code>df.iloc[100:,1:5]<jupyter_output><empty_output><jupyter_text>For getting a value explicitly:<jupyter_code>df.iloc[1,1]<jupyter_output><empty_output><jupyter_text>For getting fast access to a scalar (equivalent to the prior method):<jupyter_code>df.iat[1,1]<jupyter_output><empty_output><jupyter_text>### Selecting based on condition (boolean indexing)
Using a single column’s values to select data:<jupyter_code>df[df.gene != 'intergenic'].head()<jupyter_output><empty_output><jupyter_text>Selecting values from a DataFrame where a boolean condition is met:<jupyter_code>df[df > 0].head()<jupyter_output><empty_output><jupyter_text>Using the [`isin()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isin.html#pandas.Series.isin) method for filtering:<jupyter_code>df.gene.unique()
df[df['gene'].isin(['gene2465','gene206'])]<jupyter_output><empty_output><jupyter_text>### Setting
Setting a new column automatically aligns the data by the indexes:<jupyter_code>gc = pd.read_table('ta_gc.txt', header=None, names=['pos','gc'])
gc = gc.set_index('pos')
gc.head()
df['gc'] = gc
df.head()<jupyter_output><empty_output><jupyter_text>Setting values by label:<jupyter_code>df.at[2,'erm'] = 0
df.loc[2]
df = df.sort_index()
df.head()<jupyter_output><empty_output><jupyter_text>## Missing data
-------
pandas primarily uses the value `np.nan` to represent missing data. It is by default not included in computations. See the [Missing Data section](https://pandas.pydata.org/pandas-docs/stable/missing_data.html#missing-data).To drop any rows that have missing data:<jupyter_code>df.dropna(how='any').head()<jupyter_output><empty_output><jupyter_text>Filling missing data<jupyter_code>df.fillna(value='0').head()
df.isna().head()<jupyter_output><empty_output><jupyter_text>## Operations
-------
See the [Basic section on Binary Ops](https://pandas.pydata.org/pandas-docs/stable/basics.html#basics-binop).### Stats
Operations in general exclude missing data.
Performing a descriptive statistic:<jupyter_code>df.mean()<jupyter_output><empty_output><jupyter_text>Same operation on the other axis:<jupyter_code>df.mean(1).head()<jupyter_output><empty_output><jupyter_text>### Apply
Apply functions to the data:<jupyter_code>df.head()
import numpy as np
df.loc[:,'blunt':'tuf'].apply(np.cumsum).head()<jupyter_output><empty_output><jupyter_text>### Histogramming
See more at [Histogramming and Discretization](https://pandas.pydata.org/pandas-docs/stable/basics.html#basics-discretization):<jupyter_code>df['gene'].value_counts()
%matplotlib inline
df.loc[:,'blunt':'tuf'].hist(bins=100, sharex=True, sharey=True)<jupyter_output><empty_output><jupyter_text>### String Methods
Series is equipped with a set of string processing methods in the str attribute that make it easy to operate on each element of the array, as in the code snippet below. Note that pattern-matching in str generally uses [regular expressions](https://docs.python.org/3/library/re.html) by default (and in some cases always uses them). See more at [Vectorized String Methods](https://pandas.pydata.org/pandas-docs/stable/text.html#text-string-methods):<jupyter_code>df['gene'].str.upper().head()<jupyter_output><empty_output><jupyter_text>In the next lecture we will learn how to process data in a number of interesting ways<jupyter_code><jupyter_output><empty_output>
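<jupyter_text>As a small added preview (not part of the original tutorial), the kind of processing covered next typically starts with grouping, for example summing the construct read counts per gene:<jupyter_code>df.groupby('gene')[['blunt', 'cap', 'dual', 'erm', 'pen', 'tuf']].sum().head()<jupyter_output><empty_output>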
|
no_license
|
/tnseq_with_pandas.ipynb
|
xxihe/sandbox2019
| 25 |
<jupyter_start><jupyter_text>ASWATHI.G
DATA SCIENCE AND BUSINESS ANALYTICS INTERN @THE SPARKS FOUNDATION-JUNE2021TASK-6 Prediction using Decision Tree Algorithm
Problem
Create the Decision Tree Classifier and visualize it graphically.
The goal is that, given new data, this classifier
should be able to predict the right class for it.
# Data PreProcessing:<jupyter_code>#importing the required libraries
import pandas as pd
import numpy as py
import matplotlib.pyplot as plt
import seaborn as sns
#load the data
df = pd.read_csv("Iris.csv")
df.head(10)
df.tail()
df.isna().sum()
df.Species.value_counts()
df.tail()
df.describe()
df.drop(["Id"],axis='columns',inplace=True)
sns.pairplot(df,hue="Species")
# separating features and labels
X=df.iloc[:, :-1].values
y=df.iloc[:, -1].values
#label encoding the output
from sklearn.preprocessing import LabelEncoder
le=LabelEncoder()
y=le.fit_transform(y)
#performing train test split
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.20)<jupyter_output><empty_output><jupyter_text># Traning the model:
<jupyter_code># importing the DecisionTreeClassifier from the sklearn library and training the model on the training set
from sklearn.tree import DecisionTreeClassifier
classifier=DecisionTreeClassifier(criterion='gini',random_state=None)
classifier.fit(X_train,y_train)<jupyter_output><empty_output><jupyter_text># Prediction:<jupyter_code>#Prediction for the test data
y_pred=classifier.predict(X_test)
y_pred=le.inverse_transform(y_pred)
y_test=le.inverse_transform(y_test)
y_pred
from sklearn.metrics import accuracy_score,confusion_matrix
cm=confusion_matrix(y_test,y_pred)
print("Confiusion_Matrix is: \n",cm)
ac=accuracy_score(y_test,y_pred)
print("Accuracy_Score is: \n",ac)
<jupyter_output>Confusion_Matrix is: 
[[ 9 0 0]
[ 0 9 1]
[ 0 0 11]]
Accuracy_Score is:
0.9666666666666667
<jupyter_text># Visualising Graphically:<jupyter_code>from sklearn import tree
plt.figure(figsize=(30,25))
dot_df=tree.plot_tree(classifier,feature_names=['SepalLength(CM)','SepalWidth(CM)','PetalLength(CM)','PetalWidth(CM)'],class_names=['Iris-setosa','Iris-versicolor','Iris-virginica'],filled=True)
plt.title("Decision Tree Visualization")
plt.show()
<jupyter_output><empty_output>
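To demonstrate the stated purpose — predicting the right class for new data — here is a minimal, hedged sketch. It reuses the fitted `classifier` and the label encoder `le` from the cells above; the flower measurements are made-up illustrative values, not part of the original notebook:

```python
# Predict the species of one hypothetical flower:
# (sepal length, sepal width, petal length, petal width) in cm, illustrative values only
new_sample = [[5.1, 3.5, 1.4, 0.2]]
pred = classifier.predict(new_sample)   # numeric label from the fitted decision tree
print(le.inverse_transform(pred))       # map the numeric label back to the species name
```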
|
no_license
|
/TASK-6 DECISION TREE ALGORITHAM (1).ipynb
|
Aswathi-G1011/DecisionTree
| 4 |
<jupyter_start><jupyter_text># SQL (in Python)An introductory PowerPoint about databases is available on mycourseville. To keep things simple, this exercise uses the SQL library that already ships with Python to learn SQL commands. In real work we usually install database software on the machine and write Python code that connects to that database (for example, PostgreSQL is accessed through the psycopg2 library). Copy the following code and run it in your own Colab or your own Jupyter Notebook.
It creates two tables, following the example at https://en.wikibooks.org/wiki/SQL_Exercises/The_computer_store. If you use Jupyter Notebook on your own machine, first install sqlite3 in your environment with the command conda install -c blaze sqlite3## 1. Creating the database
<jupyter_code># CREATING THE TABLE
import sqlite3
conn = sqlite3.connect('sqltest.db')
print("Opened database successfully");
conn.execute('''
CREATE TABLE IF NOT EXISTS Manufacturers (
Code INTEGER PRIMARY KEY NOT NULL,
Name CHAR(50) NOT NULL
);''')
conn.commit()
conn.execute('''
CREATE TABLE Products (
Code INTEGER PRIMARY KEY NOT NULL,
Name CHAR(50) NOT NULL ,
Price REAL NOT NULL ,
Manufacturer INTEGER NOT NULL
CONSTRAINT fk_Manufacturers_Code REFERENCES Manufacturers(Code));''')
conn.commit()
print("Table created successfully");<jupyter_output>Opened database successfully
Table created successfully
<jupyter_text>Keywords you should know
* PRIMARY KEY means the values in that column must not repeat
* NOT NULL means the value must not be empty
* CONSTRAINT foreign_key_name REFERENCES main_table(key_column) creates a foreign key linking two tables; here the Manufacturer column is linked to Code in the Manufacturers table. The effect of defining the foreign key is that inserting a code that does not exist in the Manufacturers table is not allowed
* commit confirms the preceding statements; we can roll back to the most recent commit point. It is used with DML-type SQL statements <jupyter_code># INSERT VALUES
conn.execute("INSERT INTO Manufacturers(Code,Name) VALUES(1,'Sony');")
conn.execute("INSERT INTO Manufacturers(Code,Name) VALUES(2,'Creative Labs');")
conn.execute("INSERT INTO Manufacturers(Code,Name) VALUES(3,'Hewlett-Packard');")
conn.execute("INSERT INTO Manufacturers(Code,Name) VALUES(4,'Iomega');")
conn.execute("INSERT INTO Manufacturers(Code,Name) VALUES(5,'Fujitsu');")
conn.execute("INSERT INTO Manufacturers(Code,Name) VALUES(6,'Winchester');")
conn.execute("INSERT INTO Products(Code,Name,Price,Manufacturer) VALUES(1,'Hard drive',240,5);")
conn.execute("INSERT INTO Products(Code,Name,Price,Manufacturer) VALUES(2,'Memory',120,6);")
conn.execute("INSERT INTO Products(Code,Name,Price,Manufacturer) VALUES(3,'ZIP drive',150,4);")
conn.execute("INSERT INTO Products(Code,Name,Price,Manufacturer) VALUES(4,'Floppy disk',5,6);")
conn.execute("INSERT INTO Products(Code,Name,Price,Manufacturer) VALUES(5,'Monitor',240,1);")
conn.execute("INSERT INTO Products(Code,Name,Price,Manufacturer) VALUES(6,'DVD drive',180,2);")
conn.execute("INSERT INTO Products(Code,Name,Price,Manufacturer) VALUES(7,'CD drive',90,2);")
conn.execute("INSERT INTO Products(Code,Name,Price,Manufacturer) VALUES(8,'Printer',270,3);")
conn.execute("INSERT INTO Products(Code,Name,Price,Manufacturer) VALUES(9,'Toner cartridge',66,3);")
conn.execute("INSERT INTO Products(Code,Name,Price,Manufacturer) VALUES(10,'DVD burner',180,2);")<jupyter_output><empty_output><jupyter_text>## 2. ทดลอง queryเลือกทุกแถวทุกคอลัมน์มาจากตาราง
We can select every column by using the * symbol<jupyter_code>for row in conn.execute("SELECT * FROM manufacturers"):
print(row)
for row in conn.execute("select * from Products"):
print(row)<jupyter_output>(1, 'Hard drive', 240.0, 5)
(2, 'Memory', 120.0, 6)
(3, 'ZIP drive', 150.0, 4)
(4, 'Floppy disk', 5.0, 6)
(5, 'Monitor', 240.0, 1)
(6, 'DVD drive', 180.0, 2)
(7, 'CD drive', 90.0, 2)
(8, 'Printer', 270.0, 3)
(9, 'Toner cartridge', 66.0, 3)
(10, 'DVD burner', 180.0, 2)
<jupyter_text>Or you can use the fetchall() command<jupyter_code>conn.execute("SELECT * FROM Products").fetchall()<jupyter_output><empty_output><jupyter_text>Select only some columns<jupyter_code>conn.execute("select name,price from products").fetchall()<jupyter_output><empty_output><jupyter_text>Add a row-selection condition with the where clause<jupyter_code>conn.execute("select name,price from products where name='Hard drive'").fetchall()
conn.execute("select * from products where name like 'Z%'").fetchall()
conn.execute("select name,price from products where name like '_I%'").fetchall()
conn.execute("select name,price from products where name is NULL").fetchall()<jupyter_output><empty_output><jupyter_text>เลือกสองตารางมาเชื่อมกัน<jupyter_code>conn.execute("select * from products").fetchall()
conn.execute("select * from manufacturers").fetchall()
conn.execute("select p.name,p.price,p.manufacturer,m.name from products p, manufacturers m where p.manufacturer=m.code").fetchall()<jupyter_output><empty_output><jupyter_text>ดูชื่อ column<jupyter_code>for row in conn.execute("PRAGMA table_info(Products)"):
print(row)
for row in conn.execute("PRAGMA table_info(Manufacturers)"):
print(row)<jupyter_output>(0, 'Code', 'INTEGER', 1, None, 1)
(1, 'Name', 'CHAR(50)', 1, None, 0)
<jupyter_text>We can create a variable and then pass it into an SQL statement, as in the example.
Building the statement from a variable lets us build interactive systems<jupyter_code>n=('ZIP drive',)
for row in conn.execute("select * from Products where Name=?",n):
print(row)
type(n)<jupyter_output><empty_output><jupyter_text>If we have several sets of values, we need the executemany command<jupyter_code>manus = [(7, 'Sandisk'),
(8, 'Kingston')]
conn.executemany('INSERT INTO Manufacturers VALUES (?,?)', manus)
conn.execute("UPDATE Manufacturers SET name='KKKK' where code=8")
conn.execute("DELETE FROM Manufacturers WHERE code=8")
conn.execute("select * from Manufacturers").fetchall()
type(manus)<jupyter_output><empty_output><jupyter_text>## 3. Using it with pandasPut the data returned by a query into a pandas DataFrame<jupyter_code>import pandas as pd
df = pd.read_sql_query("SELECT * from Products", conn)<jupyter_output><empty_output><jupyter_text>Putting the result into a DataFrame makes it print out more nicely<jupyter_code>df<jupyter_output><empty_output><jupyter_text>Read more here
https://datacarpentry.org/python-ecology-lesson/09-working-with-sql/index.htmlTo go the other way, i.e. put data from a DataFrame into SQLite, follow this<jupyter_code>#example from https://datatofish.com/create-pandas-dataframe/
cars = {'Brand': ['Honda Civic','Toyota Corolla','Ford Focus','Audi A4'],
'Price': [22000,25000,27000,35000]
}
df = pd.DataFrame(cars, columns = ['Brand', 'Price'])
df.to_sql('cars', con=conn, if_exists='append')
conn.execute("select * from cars").fetchall()<jupyter_output><empty_output><jupyter_text>## 4. Types of SQL statements* DDL (Data Definition Language): statements that manage table structure, e.g. adding/removing columns or renaming a table
* DQL (Data Query Language): the select statement
* DML (Data Manipulation Language): statements that insert, delete, and update data in a table
* DCL (Data Control Language): statements that grant or revoke user privileges
* TCL (Transaction Control Language): statements that manage transactions, e.g. commit/rollback
https://www.geeksforgeeks.org/sql-ddl-dql-dml-dcl-tcl-commands/# SQL Practice 20 question<jupyter_code>conn.execute("select name from products").fetchall()
conn.execute("select name, price from products").fetchall()
conn.execute("select name from products where price <= 200").fetchall()
conn.execute("select name from products where price between 60 and 120").fetchall()
pd.read_sql_query("SELECT avg(price) from Products", conn)
pd.read_sql_query("SELECT avg(price) from Products where manufacturer = 2", conn)
pd.read_sql_query("SELECT count(*) from Products where price>= 180", conn)
pd.read_sql_query("SELECT name , price from Products where price >= 180 order by price DESC, name ASC", conn)
pd.read_sql_query("SELECT p.code as product_code, p.name as product_name, p.price as product_price,p.Manufacturer as product_Manufacturer, m.code as Manufacturer_code, m.name as Manufacturer from Products p join manufacturers m on p.Manufacturer = m.code", conn)
pd.read_sql_query("SELECT p.name as product_name, p.price as product_price, m.name as Manufacturer from Products p join manufacturers m on p.Manufacturer = m.code", conn)
pd.read_sql_query("SELECT avg(p.price), m.code as Manufacturer_code from Products p join manufacturers m on p.Manufacturer = m.code group by m.code", conn)
pd.read_sql_query("SELECT avg(p.price), m.name as Manufacturer_name from Products p join manufacturers m on p.Manufacturer = m.code group by m.name", conn)
pd.read_sql_query("SELECT m.name as Manufacturer_name from Products p join manufacturers m on p.Manufacturer = m.code group by m.name having avg(price) >= 150", conn)
pd.read_sql_query("SELECT name , price from Products order by price ASC limit 1", conn)
pd.read_sql_query("SELECT m.name as manu_name, p.name as product_name,p.price from Products p join manufacturers m on p.Manufacturer = m.code group by m.name having p.price = max(p.price)", conn)
conn.execute("INSERT INTO Products(Code,Name,Price,Manufacturer) VALUES(11,'Loudspeakers',70,2);")
conn.execute("update products set name = 'Laser Printer' where code = 8")
pd.read_sql_query("select name,code, price*90/100, manufacturer from products", conn)
pd.read_sql_query("select name,code, price*90/100, manufacturer from products where price>= 120", conn)<jupyter_output><empty_output>
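To make the DDL and TCL categories from section 4 concrete, here is a minimal, hedged sketch reusing the same sqlite3 connection `conn`; the Discount column and the deliberate rollback are illustrative assumptions, not part of the original exercise:

```python
# DDL: change the table structure by adding a hypothetical Discount column
conn.execute("ALTER TABLE Products ADD COLUMN Discount REAL DEFAULT 0")
conn.commit()                                        # TCL: confirm the structural change

# DML: modify data, then undo it by rolling back to the last commit point
conn.execute("UPDATE Products SET Discount=0.1 WHERE Price >= 200")
conn.rollback()                                      # TCL: discards the uncommitted UPDATE

print(conn.execute("PRAGMA table_info(Products)").fetchall())
```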
|
no_license
|
/SQL.ipynb
|
keiseithunder/SQLPractice
| 14 |
<jupyter_start><jupyter_text># Homework
Practice implementing an affine transformation with a rotation transform plus a translation transform
> Rotate 45 degrees + scale by 0.5 + translate (x+100, y-50)<jupyter_code>import cv2
import time
import numpy as np
img = cv2.imread('lena.png')<jupyter_output><empty_output><jupyter_text>## Affine Transformation - Case 2: any three points<jupyter_code># Given three pairs of corresponding points
# Here we set the three pairs of points manually; in practice they would come from data or be marked by hand through an interface
rows, cols = img.shape[:2]
pt1 = np.array([[50,50], [300,100], [200,300]], dtype=np.float32)
pt2 = np.array([[80,80], [330,150], [300,300]], dtype=np.float32)
# Get the affine matrix and apply the affine transformation
M_affine = cv2.getAffineTransform(pt1,pt2)
img_affine = cv2.warpAffine(img,M_affine,(cols,rows))
# Mark the points on the images
img_copy = img.copy()
for idx, pts in enumerate(pt1):
pts = tuple(map(int, pts))
cv2.circle(img_copy, pts, 3, (0, 255, 0), -1)
cv2.putText(img_copy, str(idx), (pts[0]+5, pts[1]+5), cv2.FONT_HERSHEY_COMPLEX, 1, (0, 255, 0), 1)
for idx, pts in enumerate(pt2):
pts = tuple(map(int, pts))
cv2.circle(img_affine, pts, 3, (0, 255, 0), -1)
cv2.putText(img_affine, str(idx), (pts[0]+5, pts[1]+5), cv2.FONT_HERSHEY_COMPLEX, 1, (0, 255, 0), 1)
# Combine and write out the images
img_show_affine = np.hstack((img_copy, img_affine))
cv2.imwrite('affine.png',img_show_affine)<jupyter_output><empty_output>
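The assignment text at the top asks for rotation by 45 degrees + scaling by 0.5 + translation (x+100, y-50), while the cells above only cover the three-point case. Below is a minimal, hedged sketch of that first case; it assumes the same lena.png input, and the output filename is an illustrative choice:

```python
# Sketch: rotate 45 degrees, scale by 0.5, then translate by (x+100, y-50)
import cv2
import numpy as np

img = cv2.imread('lena.png')
rows, cols = img.shape[:2]

# 2x3 affine matrix combining rotation about the image center with scaling
M = cv2.getRotationMatrix2D((cols // 2, rows // 2), 45, 0.5)

# Add the translation to the last column of the affine matrix
M[0, 2] += 100   # x + 100
M[1, 2] -= 50    # y - 50

img_rst = cv2.warpAffine(img, M, (cols, rows))
cv2.imwrite('affine_rst.png', np.hstack((img, img_rst)))
```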
|
no_license
|
/Day006_affine_HW.ipynb
|
K-F-github/1st-DL-CVMarathon
| 2 |
<jupyter_start><jupyter_text>## Hypothesis (EDA)
1) In the first part, we look at the measure of goodness of a blog - claps
2) Take a look at the tags and their distribution
3) We remove outliers (too short or too long titles and subtitles)<jupyter_code>#Hypothesis 1
ax = sns.kdeplot(df['Claps'])
ax.set(xlabel = 'Number of Claps', ylabel = 'Density')
MAX = 1000
clipped_claps = df['Claps'][df['Claps']<MAX]
sns.kdeplot(clipped_claps)
ax.set(xlabel = 'Number of Claps', ylabel = 'Density')
claps_without_subt = df[df['Subtitle'].isna()]['Claps']
claps_with_subt = df[~df['Subtitle'].isna()]['Claps']
print("Percentage of claps without subtitle: {}".format(len(claps_without_subt)))
print("Percentage of claps with subtitle: {}".format(len(claps_with_subt)))
assert len(claps_with_subt) + len(claps_without_subt) == df.shape[0]
sns.kdeplot(claps_with_subt, bw=0.5, label='With Subtitle')
sns.kdeplot(claps_without_subt, bw=0.5, label='Without Subtitle')
print("Mean Claps with subtitle: ", int(np.mean(claps_with_subt)))
print("Mean Claps without subtitle: ", int(np.mean(claps_without_subt)))
print("Median Claps with subtitle: ", int(np.median(claps_with_subt)))
print("Median Claps without subtitle: ", int(np.median(claps_without_subt)))
print("Std Dev Claps with subtitle: ", int(np.std(claps_with_subt)))
print("Std Dev Claps without subtitle: ", int(np.std(claps_without_subt)))<jupyter_output>Std Dev Claps with subtitle: 986
Std Dev Claps without subtitle: 1183
<jupyter_text>Looks like most of our blogs don't have that many claps.<jupyter_code>df[tag_columns] = df[tag_columns].astype(int)
df['Num_Tags'] = df[tag_columns].apply(lambda x: sum(x), axis=1)
plt.hist(df['Num_Tags'])
pd.DataFrame(df[tag_columns].sum().sort_values(ascending=True)).plot.barh(figsize=(15,20))<jupyter_output><empty_output><jupyter_text>## Analyzing text<jupyter_code>#helper functions
def preprocess(sentence):
try:
tokens = word_tokenize(sentence.lower())
tokens = [token for token in tokens if token not in string.punctuation and token not in stopwords_en]
tokens = [token.translate(str.maketrans('', '', string.punctuation)) for token in tokens]
return ' '.join(tokens)
except:
print(sentence)
print(preprocess("Hello! I am A.I., your assistant"))
title_wordcount = list(map(lambda x: len(preprocess(x).split()), df['Title']))
def basic_stats(num_list):
print("Mean: ", np.mean(num_list))
print("Median: ", np.median(num_list))
print("Mode: ", scipy.stats.mode(num_list)[0][0])
print("For word count in title -")
basic_stats(title_wordcount)
ax = plt.hist(title_wordcount, bins=50)
plt.xlabel("Word Length")
plt.ylabel("Count")
#filter out articles with title wordcount > 15
MAX = 15
count_lt = [count<=MAX for count in title_wordcount]
df = df[count_lt]
print("Number of articles now: ", df.shape[0])
print("Number of articles deleted: ",len([count for count in title_wordcount if count>MAX]))
ax = plt.hist([count for count in title_wordcount if count<=MAX])
plt.xlabel("Word Length")
plt.ylabel("Count")
#Check subtitle
print("Percentage of blogs without subtitles: ", int(100*df['Subtitle'].isnull().sum()/df.shape[0]))
df['Subtitle'] = df['Subtitle'].fillna('')
subtitle_wordcount = list(map(lambda x: len(preprocess(x).split()), df['Subtitle']))
ax = plt.hist(subtitle_wordcount)
plt.xlabel("Word Count")
plt.ylabel("Count")
MAX = 40
count_lt = [count<=40 for count in subtitle_wordcount]
df = df[count_lt]
ax = plt.hist([count for count in subtitle_wordcount if count<=MAX])
plt.xlabel("Word Count")
plt.ylabel("Count")
print("Number of articles now: ", df.shape[0])
print("Number of articles deleted: ",len([count for count in subtitle_wordcount if count>MAX]))
#final step - Joint title-subtitle analysis
df['Joint_Text'] = df[['Title', 'Subtitle']].apply(lambda x: x[0] + ' ' + x[1], axis=1)
print(df.loc[0, 'Title'])
print(df.loc[0, 'Subtitle'])
print(df.loc[0, 'Joint_Text'])
combined_wordcount = list(map(lambda x: len(preprocess(x).split()), df['Joint_Text']))
sns.kdeplot(combined_wordcount)
plt.xlabel("Word count")
plt.ylabel("Density")<jupyter_output><empty_output><jupyter_text>## Checkpoint: we have removed the data we didn't want to predict tags. We now dig deeper into the text<jupyter_code>def advanced_preprocess(sentence):
sentence = re.subn(r'\S+.com', ' ', sentence)[0] #remove url or emials
sentence = re.subn(r'\d+\.\d+', ' ', sentence)[0] #remove floating numbers
sentence = re.subn(r'\d+', ' ', sentence)[0] #remove integers
return sentence
#some more cleaning
def clean_nonaplha(x):
return bool(set(string.ascii_letters).intersection(set(x)))
df = df[df['Joint_Text'].apply(lambda x: clean_nonaplha(x))]
df['Joint_Text'] = df['Joint_Text'].apply(lambda x: preprocess(x))
df['Joint_Text'] = df['Joint_Text'].apply(lambda x: advanced_preprocess(x))
final_columns = ['Joint_Text'] + tag_columns
df[final_columns].to_csv('C:/Users/Dhruvil/Desktop/Projects/medium_search/data/clean.csv')<jupyter_output><empty_output>
|
no_license
|
/ipynb_checkpoints/.ipynb_checkpoints/Medium EDA-checkpoint.ipynb
|
DhruvilKarani/MediumTag
| 4 |
<jupyter_start><jupyter_text># Python Data Structures and Boolean <jupyter_code>print(True,False,True,False)
!pip install ipyparallel
type(True)
#Inbuilt string functions
name="Solomon"
print(name.isalnum())
print(name.isdigit())
print(name.isalpha())
print(name.isspace())
print(name.istitle())
print(name.endswith('n'))
print(name.startswith('o'))
print(name.isupper())
print(name.islower())<jupyter_output>True
False
True
False
True
True
False
False
False
<jupyter_text># Boolean and Logical Operators<jupyter_code>name.isalpha() or name.istitle()
name.isalpha() and name.istitle()<jupyter_output><empty_output><jupyter_text># Lists
* Is a mutable or changeable ordered sequence of elements<jupyter_code>test=[]
type(test)
test1=["Solomon", "Basket", "School",1,2,34,5,6]
##List Methods
len(test1)
test1.append("append")
test1
test1[3]
test1[1:4]
test1.insert(2,"insert")
test1<jupyter_output><empty_output><jupyter_text># Extend<jupyter_code>test1.extend([1,45,56,67,'Extend'])
test1<jupyter_output><empty_output><jupyter_text># List Operations<jupyter_code>sum(test1[4:6])
test1.pop() # Shows the value been removed as results
test1 # element removed
test1.pop(0)
test1.count(2) # Count how many times the element passed as argument appears in the list; here it counts the number 2
len(test1) # count() is not a built-in function; len() gives the number of elements in the list
test1*2<jupyter_output><empty_output>
|
no_license
|
/.ipynb_checkpoints/01.Lists and Boolean Variables-checkpoint.ipynb
|
kwabena55/Complete-ML-Crash-Course
| 5 |
<jupyter_start><jupyter_text># Bivariate Analysis
- The idea is to analyze two variables at the same time and find any relation between them.
- One way is to use correlation coefficients to find if two columns are related or not
- It provides a broader perspective as compared to univariate analysis
#### Graphs
- Scatter Plots
- Mosaic Plots
- Histograms / bar plots
- Heatmaps
- Linecharts, and so on
### Correlation Coefficient

##### Pearson Correlation Coefficient

##### Spearman's Correlation Coefficient

##### Kendall's Tau Correlation Coefficient

### Scatter Plots and Heatmaps<jupyter_code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
x1 = np.random.normal(10,1,200)*10
x1
sns.distplot(x1)<jupyter_output><empty_output><jupyter_text>#### Negative Correlation with x1<jupyter_code>y1 = 100 - x1
ax = sns.scatterplot(x1,y1,)
ax.set(xlabel='x1',ylabel='y1')
plt.show()
from scipy.stats import pearsonr
pearsonr(x1,y1)<jupyter_output><empty_output><jupyter_text>This is almost perfect negative correlation
#### Positive Correlation
###### In the real world, you would probably see more noise in your data<jupyter_code>x2 = np.random.normal(10,1,200)*10
y2 = x2 + np.random.normal(40,5.2,200)
ax = sns.scatterplot(x2,y2,)
ax.set(xlabel='x2',ylabel='y2')
plt.show()
pearsonr(x2,y2)<jupyter_output><empty_output><jupyter_text>### Heatmaps<jupyter_code>x3 = np.random.random(200)
y3 = x1+x3 - 20
x4 = np.random.normal(100,1.5,200)
y4 = x4 + x1 + x2
data_df = pd.DataFrame({'x1':x1,'x2':x2,'x3':x3,'x4':x4,'y1':y1,'y2':y2,'y3':y3,'y4':y4})
data_df.head()
sns.pairplot(data_df)<jupyter_output><empty_output><jupyter_text>## Use pandas corr() function to find correlation<jupyter_code>data_df.corr()
sns.set(rc={'figure.figsize':(8,8)})
ax = sns.heatmap(data_df.corr(),annot=False,linewidths=1,fmt='.2f')
sns.set(rc={'figure.figsize':(8,8)})
ax = sns.heatmap(data_df.corr(),annot=True,linewidths=1,fmt='.2f')
sns.set(rc={'figure.figsize':(8,8)})
ax = sns.heatmap(data_df.corr(method='spearman'),annot=True,linewidths=1,fmt='.2f')<jupyter_output><empty_output><jupyter_text>##### Interesting links for Correlation Coefficients:
- https://stats.stackexchange.com/questions/8071/how-to-choose-between-pearson-and-spearman-correlation
- https://support.minitab.com/en-us/minitab-express/1/help-and-how-to/modelling-statistics/regression/supporting-topics/basics/a-comparison-of-the-pearson-and-spearman-correlation-methods/
- https://stats.stackexchange.com/questions/344737/how-to-know-whether-pearsons-or-spearmans-correlation-is-better-to-use
#### Quick Recap
- We can use corr() function in pandas DataFrames, to find correlation between different variables of DataFrames
- We can visualize correlation between multiple variables together via heatmaps
References:
- https://seaborn.pydata.org/generated/seaborn.scatterplot.html
- https://seaborn.pydata.org/generated/seaborn.heatmap.html
- https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.corr.html
#### Bivariate Analysis for Titanic Dataset<jupyter_code>titanic_data_df = pd.read_csv('train.csv')
titanic_data_df.columns
g = sns.countplot(x='Sex',hue='Survived',data=titanic_data_df)
g = sns.catplot(x='Embarked',col='Survived',data=titanic_data_df,kind='count',height=4,aspect=.7)
g = sns.countplot(x='Embarked',hue='Survived',data=titanic_data_df)
g = sns.countplot(x='Embarked',hue='Pclass',data=titanic_data_df)
g = sns.countplot(x='Pclass',hue='Survived',data=titanic_data_df)<jupyter_output><empty_output><jupyter_text>#### Add a new column - Family size
I will be adding a new column 'Family Size' which will be the SibSp and Parch + 1<jupyter_code>def add_family(df):
df['FamilySize']=df['SibSp']+df['Parch']+1
return df
titanic_data_df = add_family(titanic_data_df)
titanic_data_df.head(10)
g = sns.countplot(x='FamilySize',hue='Survived',data=titanic_data_df)
g = sns.countplot(x='FamilySize',hue='Sex',data=titanic_data_df)<jupyter_output><empty_output><jupyter_text>#### Add a new column - Age Group<jupyter_code>age_df = titanic_data_df[~titanic_data_df['Age'].isnull()]
age_bins = ['0-9','10-19','20-29','30-39','40-49','50-59','60-69','70-79']
age_df['ageGroup'] = pd.cut(titanic_data_df.Age,range(0,81,10),right=False,labels=age_bins)
sns.countplot(x='ageGroup',hue='Survived',data=age_df)<jupyter_output><empty_output><jupyter_text>### Bivariate Analysis for Video Games Sales dataset<jupyter_code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
data_df = pd.read_csv('Video_Games_Sales_as_at_22_Dec_2016.csv')
data_df.columns
data_df.head()
data_df.info()
data_df.describe()<jupyter_output><empty_output><jupyter_text>The most obvious thing that comes to mind is that, is any column related to global sales for a Game?<jupyter_code>sns.pairplot(data_df)
sns.set(rc={'figure.figsize':(8,8)})
sns.heatmap(data_df.corr(),annot=True,fmt='.2f')<jupyter_output><empty_output><jupyter_text>#### Lets try focusing on Sales Related Columns only<jupyter_code>sns.heatmap(data_df[['NA_Sales','EU_Sales','JP_Sales','Other_Sales','Global_Sales']].corr(),annot=True,fmt='.2f')
ax = sns.scatterplot(x=data_df['Critic_Score'],y=data_df['User_Score'])
ymin,ymax = ax.get_ylim()
ax.set_yticks(np.round(np.linspace(ymin,ymax,25,2)))
plt.tight_layout()
plt.locator_params(axis='y',nbins=6)
plt.show()
score_df = data_df[['Critic_Score','User_Score']]
score_df = score_df[score_df['User_Score']!='tbd']
score_df['User_Score'] = pd.to_numeric(score_df['User_Score'],errors='coerce')
score_df.dropna(how='any',inplace=True)
score_df.info()
sns.scatterplot(x=score_df['Critic_Score'],y=score_df['User_Score'])
score_df.corr()
score_df.corr(method='spearman')<jupyter_output><empty_output><jupyter_text>#### Lets move on to Genre<jupyter_code>genre_group = data_df.groupby('Genre').size()
genre_group.plot.bar()
data_df['Rating'].unique
g = sns.catplot(x='Genre',hue='Rating',data=data_df,kind='count',height=10)
count_year_gen = pd.DataFrame({'count':data_df.groupby(['Genre','Year_of_Release']).size()}).reset_index()
print(data_df.groupby(['Genre','Year_of_Release']).size())<jupyter_output><empty_output><jupyter_text>#### Release by Genre<jupyter_code>ax = sns.boxplot(x='count',y='Genre',data=count_year_gen,whis=np.inf)
sns.set(rc={'figure.figsize':(20,8)})
ax = sns.lineplot(x='Year_of_Release',y='count',hue='Genre',data=count_year_gen)<jupyter_output><empty_output><jupyter_text>##### Code Modifed from :
https://bokeh.pydata.org/en/latest/docs/user_guide/interaction/legends/htmls<jupyter_code>from bokeh.palettes import Spectral11
from bokeh.plotting import figure, output_file,show
from bokeh.models import Legend, LegendItem
p = figure(plot_width=800,plot_height=550)
p.background_fill_color='beige'
p.title.text = 'Click on legend entries to hide the corresponding lines'
import random
legend_list = []
for genre_id in count_year_gen['Genre'].unique():
color = random.choice(Spectral11)
df = pd.DataFrame(count_year_gen[count_year_gen['Genre']==genre_id])
p.line(df['Year_of_Release'],df['count'],line_width=2,alpha=0.8,color=color,legend=genre_id)
p.legend.location='top_left'
p.legend.click_policy='hide'
show(p)<jupyter_output><empty_output><jupyter_text>#### Sales per Genre per region<jupyter_code>genre_region_na = pd.DataFrame({'Sales':data_df.groupby('Genre')['NA_Sales'].sum()}).reset_index()
sns.barplot(x='Genre',y='Sales',data=genre_region_na)
genre_region_eu = pd.DataFrame({'Sales':data_df.groupby('Genre')['EU_Sales'].sum()}).reset_index()
sns.barplot(x='Genre',y='Sales',data=genre_region_eu)
genre_region_jp = pd.DataFrame({'Sales':data_df.groupby('Genre')['JP_Sales'].sum()}).reset_index()
sns.barplot(x='Genre',y='Sales',data=genre_region_jp)
genre_region_other = pd.DataFrame({'Sales':data_df.groupby('Genre')['Other_Sales'].sum()}).reset_index()
sns.barplot(x='Genre',y='Sales',data=genre_region_other)
platform_group = pd.DataFrame({'Sales':data_df.groupby(['Platform','Year_of_Release']).sum()}).reset_index()
print(data_df.groupby(['Platform','Year_of_Release']).size())
data_df.groupby('Genre')['Publisher'].apply(lambda x:x.value_counts().index[0])
# Genre by Critics Score
critic_genre_score = pd.DataFrame({'Score':data_df.groupby('Genre')['Critic_Score'].median()})
sns.barplot(x='Genre',y='Score',data=critic_genre_score)<jupyter_output><empty_output>
|
no_license
|
/Bivariate Analysis.ipynb
|
sureshmanem/DS_Algorithms
| 15 |
<jupyter_start><jupyter_text>## This is a notebook which will be mainly used for the Capstone project<jupyter_code>import pandas as pd
import numpy as np
print("Hello Capstone Project Course!")<jupyter_output>Hello Capstone Project Course!
|
no_license
|
/Capstone_FK.ipynb
|
chegeo/Coursera_Capstone
| 1 |
<jupyter_start><jupyter_text># Preliminary Data Analysis#### Summary:
1. Importing Dependencies
2. Pull CSV File from previous work
3. Merge data
4. Graph data to get sum of viewers for each game
5. Show the dataframe for the counts of each game played by Streamer### 1. Importing Dependencies:<jupyter_code>import pandas as pd
import matplotlib.pyplot as plt<jupyter_output><empty_output><jupyter_text>### 2. Pull CSV File<jupyter_code># Pulling top 50 data from csv file in Data
df_top_50 = pd.read_csv('Data/streamer_game_data.csv')
# Pulling the top games data from csv file in Data
df_top_games = pd.read_csv('Data/top_game_ranking.csv')
<jupyter_output><empty_output><jupyter_text>### 3. Merge Data<jupyter_code>df_merged = df_top_50.merge(df_top_games, on = "Game Name")
df_merged.head()<jupyter_output><empty_output><jupyter_text>### 4. Graph Data<jupyter_code># We want to get the sum of the average viewers for each game
df_graph = df_merged.groupby('Game Name').sum()
# Sort the dataframe based on average viewers
df_graph = df_graph.sort_values(by=['Average viewers'], ascending = False)
# Reset the index
df_graph = df_graph.reset_index()
# Filter out unnecessary columns
df_graph = df_graph[["Game Name", "Average viewers", "Followers"]]
# Re-merge the rank of each games
df_graph = df_graph.merge(df_top_games, on = "Game Name")
df_graph
# Plotting the dataframe
x_axis = df_graph['Game Name']
y_axis = df_graph['Average viewers']
#Create the bar chat
plt.figure(figsize=(20,10))
plt.bar(x_axis, y_axis, color='b', alpha=0.5, align="center")
plt.xticks(rotation = 'vertical')
plt.title("Top 50 Streamers Game Views")
plt.xlabel("Game Name")
plt.ylabel("Average Viewers");<jupyter_output><empty_output><jupyter_text>### 5. Showing Counts of streamers in dataframe<jupyter_code># Get the count of the streamer based on each game
df_streamer_count = df_merged.groupby('Game Name').count()
# Sort the values in descending order
df_streamer_count = df_streamer_count.sort_values(by = 'Channel', ascending = False)
# Delete all unneccesary columns
df_streamer_count = df_streamer_count[['Channel']]
# Rename the column
df_streamer_count = df_streamer_count.rename(columns = {'Channel':'Number of Streamers'})
df_streamer_count<jupyter_output><empty_output>
|
no_license
|
/.ipynb_checkpoints/3-Preliminary Data Analysis-checkpoint.ipynb
|
asoemardy/great-gaming-googlers
| 5 |
<jupyter_start><jupyter_text># Exploratory AnalysisImporting basic libraries :<jupyter_code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline<jupyter_output><empty_output><jupyter_text>Importing training data<jupyter_code>train = pd.read_csv("../../data/train.csv")<jupyter_output><empty_output><jupyter_text>Preview of this data<jupyter_code>train.head(20).transpose()<jupyter_output><empty_output><jupyter_text>Overall statistics of all columns.
Can some columns be dropped because they don't add any value?<jupyter_code>train.describe().transpose()<jupyter_output><empty_output><jupyter_text>As a naive start, I would get rid of all the records that have even a single field set to NA. This would greatly reduce the overall data size and would also provide a good baseline number.
Also easier to handle.After dropping the rows with NA values, print the new summary<jupyter_code>train = train.dropna();
train.describe().transpose()<jupyter_output><empty_output><jupyter_text>How many data points have rainfall above 300? Anything above that seems odd, given the percentiles. Will get rid of those.<jupyter_code>train.drop(train[train['Expected'] > 500].index).describe().transpose()<jupyter_output><empty_output><jupyter_text>With these remaining data points we will continue with our study.<jupyter_code>train = train.drop(train[train['Expected'] > 500].index)<jupyter_output><empty_output><jupyter_text>## Analyze expected rainfall<jupyter_code>plt.figure()
plt.hist(train['Expected'].as_matrix(), bins = 1000)
#plt.hist(log)
plt.title("Expected Rainfall histogram")
plt.xlabel("Expected Rain")
plt.ylabel("Frequency")
plt.show()<jupyter_output><empty_output><jupyter_text>Export this data out for learning<jupyter_code>train.to_csv("../../data/train_processed.csv",index=False)<jupyter_output><empty_output>
|
no_license
|
/src/notebook/How much did it rain II ?.ipynb
|
wrecker-mishra/raindetect
| 9 |
<jupyter_start><jupyter_text># Algorithms
# Homework 1
Vanessa Wormer
UNI vw2210# 1. Write a function that takes in a list of numbers and outputs the mean of the numbers using the formula for mean. Do this without any built-in functions like sum(), len(), and, of course, mean()<jupyter_code>n = [2,3,4,5]
def average(n):
list_sum = 0
list_count = 0
for i in n:
list_sum = list_sum + i
list_count = list_count + 1
return list_sum/list_count
average(n)<jupyter_output><empty_output><jupyter_text># 2. Create your own version of the Mayoral Excuse Machine (http://dnain.fo/1CCHKmI) in Python that takes in a name and location, selects an excuse at random and prints an excuse (“Sorry, Richard, I was late to City Hall to meet you, I had a very rough night and woke up sluggish”). Use the “excuses.csv” in the Github repository. Extra credit if you print the link to the story as well.<jupyter_code>import csv
import random
myfile = open("excuse.csv", 'rU')
excuses = list(csv.DictReader(myfile))
#def excuse(name, location):
# return "Sorry " + name + " I was late to " + location + " to meet you " + line["excuse"]
#print excuse("Vanessa", "Paris")
name = raw_input("Enter your name: ")
location = raw_input("Enter your location: ")
def excuse(name, location):
random_excuse = random.choice(excuses)
return "Sorry "+ name +" I was late at the "+ location+ " because "+ random_excuse["excuse"]+" and that's the full story: "+ random_excuse["hyperlink"]
print excuse(name, location)<jupyter_output>Enter your name: Vanessa
Enter your location: Uni
Sorry Vanessa I was late at the Uni because we had some meetings at Gracie Mansion and that's the full story: http://www.dnainfo.com/new-york/20150307/belle-harbor/de-blasio-30-minutes-late-rockaway-st-patricks-day-parade
<jupyter_text># 3. The following code will print the prime numbers between 1 and 100. Modify the code so it prints every other prime number from 1 to 100<jupyter_code>l = []
for num in range(1,101): # for-loop through the numbers
prime = True # boolean flag to check the number for being prime
for i in range(2,num): # for-loop to check for "primeness" by checking for divisors other than 1
if (num%i==0): # logical test for the number having a divisor other than 1 and itself
prime = False # if there's a divisor, the boolean value gets flipped to False
if prime: # if prime is still True after going through all numbers from 1 - 100, then it gets printed
l.append(num)
print l[::3]<jupyter_output>[1, 5, 13, 23, 37, 47, 61, 73, 89]
<jupyter_text># Extra Credit: Can you write a procedure that runs faster than the one above?<jupyter_code>l = ()<jupyter_output><empty_output><jupyter_text># The writer of this code wants to count the mean and median article length for recent articles on gay marriage in the New York Times. This code has several issues, including errors. When they checked their custom functions against the numpy functions, they noticed some discrepancies. Fix the code so it executes properly, retrieves the articles, and outputs the correct result from the custom functions, compared to the numpy functions.<jupyter_code>import requests # a better package than urllib2
def my_mean(input_list):
list_sum = 0
list_count = 0
for el in input_list:
list_sum += int(el)
list_count += 1
return list_sum / list_count, list_count
def my_median(input_list):
list_length = len(input_list)
if len(input_list)%2==0:
return (sorted(input_list)[((list_length)/2)]+sorted(input_list)[((list_length)/2)-1])/2
else:
return sorted(input_list)[(list_length)/2]
api_key = "ffaf60d7d82258e112dd4fb2b5e4e2d6:3:72421680"
url = "http://api.nytimes.com/svc/search/v2/articlesearch.json?q=gay+marriage&api-key=%s" % api_key
r = requests.get(url)
wc_list = []
for article in r.json()['response']['docs']:
wc_list.append(int(article['word_count']))
my_mean(wc_list)
import numpy as np
np.mean(wc_list)
my_median(wc_list)
np.median(wc_list)<jupyter_output><empty_output>
|
non_permissive
|
/class1_1gexercise/algorithms_hw1_vanessa.ipynb
|
vwormer/lede_algorithms
| 5 |
<jupyter_start><jupyter_text>## Demo 0105: Array Sorting### Case 1: Sorting a one-dimensional array> **Ascending and descending order**
* *sort* only sorts in ascending order; if you need descending order, reverse the sorted result<jupyter_code>import numpy as np
a = np.array([3, 1, 7, 4, 2, 5, 8])
b = np.sort(a)
print(b)
print(b[::-1]) # use [::-1] to traverse in reverse order<jupyter_output>[1 2 3 4 5 7 8]
[8 7 5 4 3 2 1]
<jupyter_text>> ** The different behavior of *np.sort(a)* and *a.sort()* **
* *np.sort(a)* does not modify a; it returns a copy, and that copy is sorted
* *a.sort()* returns None and sorts a in place<jupyter_code>a = np.array([3, 1, 7, 4, 2, 5, 8])
b = np.sort(a)
print(a)
a.sort()
print(a)<jupyter_output>[3 1 7 4 2 5 8]
[1 2 3 4 5 7 8]
<jupyter_text>### Case 2: Sorting a two-dimensional array>** Sorting by row or by column **
* With *axis=0*, sorting runs along the vertical direction (the direction of increasing row index), so each column is sorted separately
* With *axis=1*, sorting runs along the horizontal direction (the direction of increasing column index), so each row is sorted separately
* When *axis* is not specified, the behavior is the same as *axis=1*<jupyter_code>a = np.array([[1,4,3],[3,1,8], [2,0,7]])
print(np.sort(a))
print(np.sort(a, axis=0))
print(np.sort(a, axis=1))<jupyter_output>[[1 3 4]
[1 3 8]
[0 2 7]]
[[1 0 3]
[2 1 7]
[3 4 8]]
[[1 3 4]
[1 3 8]
[0 2 7]]
<jupyter_text>### Case 3: Partitioning data>** Given a boundary value, split the data in the array into two regions: every value in the left region is smaller than the boundary value and every value in the right region is larger **
* The $np.partition$ function performs the partition. It takes the index of the boundary element; after partitioning, every element to the left of that index is smaller than the boundary element and every element to the right is larger
* The data inside the left and right regions is not itself sorted<jupyter_code>a = np.array([30, 80, 70, 100, 50, 40, 20, 90])
b = np.partition(a, 0) # after partitioning, every element to the right of index 0 is larger than the element at that position
c = np.partition(a, 6) # after partitioning, every element to the left of index 6 is smaller than the element at that position, and every element to the right is larger
print(b) # the element at index 0 is 20
print(c) # the element at index 6 is 90<jupyter_output>[ 20 80 70 100 50 40 30 90]
[ 20 30 70 80 50 40 90 100]
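Closely related to the sorting and partitioning calls above is np.argsort, which returns the indices that would sort the array rather than the sorted values. A minimal sketch (an addition here, not part of the original demo):

```python
import numpy as np

a = np.array([30, 80, 70, 100, 50, 40, 20, 90])
idx = np.argsort(a)   # indices that would sort a in ascending order
print(idx)            # the index of the smallest element comes first
print(a[idx])         # same result as np.sort(a)
```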
|
permissive
|
/01_Numpy/0105_数组排序.ipynb
|
iahuohz/Machine-Learning-Lab
| 4 |
<jupyter_start><jupyter_text># AssignmentQ1. Write the NumPy program to create a 2d array with 6 on the border
and 0 inside?
Expected OutputOriginal array-
[ [6 6 6 6 6]
[ 6 6 6 6 6]
[ 6 6 6 6 6 ]
[ 6 6 6 6 6 ]
[ 6 6 6 6 ] ].
6 on the border and 0 inside in the array-
[[ 6 6 6 6 6]
[ 6 0 0 0 6]
[ 6 0 0 0 6]
[ 6 0 0 0 6]
[ 6 6 6 6 6]].<jupyter_code>import numpy as np
x = np.ones((5,5))
print("Original array:")
print(x)
print("1 on the border and 0 inside in the array")
x[1:-1,1:-1] = 0
print(x)<jupyter_output>Original array:
[[1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1.]]
1 on the border and 0 inside in the array
[[1. 1. 1. 1. 1. 1.]
[1. 0. 0. 0. 0. 1.]
[1. 0. 0. 0. 0. 1.]
[1. 0. 0. 0. 0. 1.]
[1. 0. 0. 0. 0. 1.]
[1. 1. 1. 1. 1. 1.]]
<jupyter_text>Q2. Write the NumPy program to create a 8x8 matrix and fill it with the
checkerboard pattern?
Checkerboard pattern-
[[3 9 3 9 3 9 3 9]
[9 3 9 3 9 3 9 3]
[3 9 3 9 3 9 3 9]
[9 3 9 3 9 3 9 3]
[3 9 3 9 3 9 3 9]
[9 3 9 3 9 3 9 3]
[3 9 3 9 3 9 3 9]
[9 3 9 3 9 3 9 3]].<jupyter_code>import numpy as np
x = np.ones((3,3))
print("Checkerboard pattern:")
x = np.zeros((8,8),dtype=int)
x[1::2,::2] = 1
x[::2,1::2] = 1
print(x)<jupyter_output>Checkerboard pattern:
[[0 1 0 1 0 1 0 1]
[1 0 1 0 1 0 1 0]
[0 1 0 1 0 1 0 1]
[1 0 1 0 1 0 1 0]
[0 1 0 1 0 1 0 1]
[1 0 1 0 1 0 1 0]
[0 1 0 1 0 1 0 1]
[1 0 1 0 1 0 1 0]]
<jupyter_text>Q3. Write the NumPy program to create an empty and a full array.
Expected Output-
[[4.45057637e-308 1.78021527e-306 8.45549797e-307 1.37962049e-306]
[1.11260619e-306 1.78010255e-306 9.79054228e-307 4.45057637e-308]
[8.45596650e-307 9.34602321e-307 4.94065646e-322 0.00000000e+000]]
[[6 6 6]
[6 6 6]
[6 6 6]]
<jupyter_code>import numpy as np
x = np.empty((3,4))
print(x)
y = np.full((3,3),6)
print(y) <jupyter_output>[[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]]
[[6 6 6]
[6 6 6]
[6 6 6]]
<jupyter_text>Q4. Write the NumPy program to convert the values of Centigrade
degrees into the Fahrenheit degrees and the centigrade values are
stored in the NumPy array.
Sample Array -[0, 12, 45.21 ,34, 99.91]
Expected OutputValues in Fahrenheit degrees-
[ 0. 12. 45.21 34. 99.91]
Values in Centigrade degrees-
[-17.77777778 -11.11111111 7.33888889 1.11111111 37.72777778]<jupyter_code>
import numpy as np
fvalues = [0, 12, 45.21, 34, 99.91]
F = np.array(fvalues)
print("Values in Fahrenheit degrees:")
print(F)
print("Values in Centigrade degrees:")
print(5*F/9 - 5*32/9) <jupyter_output>Values in Fahrenheit degrees:
[ 0. 12. 45.21 34. 99.91]
Values in Centigrade degrees:
[-17.77777778 -11.11111111 7.33888889 1.11111111 37.72777778]
<jupyter_text>Q5. Write the NumPy program to find the real and imaginary parts of an
array of complex numbers?
Expected OutputOriginal array [ 1.00000000+0.j 0.70710678+0.70710678j]
Real part of the array-
[ 1. 0.70710678]
Imaginary part of the array-
[ 0. 0.70710678]<jupyter_code>import numpy as np
x = np.sqrt([1+0j])
y = np.sqrt([0+1j])
print("Original array:x ",x)
print("Original array:y ",y)
print("Real part of the array:")
print(x.real)
print(y.real)
print("Imaginary part of the array:")
print(x.imag)
print(y.imag) <jupyter_output>Original array:x [1.+0.j]
Original array:y [0.70710678+0.70710678j]
Real part of the array:
[1.]
[0.70710678]
Imaginary part of the array:
[0.]
[0.70710678]
<jupyter_text>Q6. Write the NumPy program to test whether each element of a 1-D
array is also present in the second array?
Expected OutputArray1: [ 0 10 20 40 60]
Array2: [0, 40]
Compare each element of array1 and array2
[ True False False True False]
<jupyter_code>import numpy as np
array1 = np.array([0, 10, 20, 40, 60])
print("Array1: ",array1)
array2 = [0, 40]
print("Array2: ",array2)
print("Compare each element of array1 and array2")
print(np.in1d(array1, array2)) <jupyter_output>Array1: [ 0 10 20 40 60]
Array2: [0, 40]
Compare each element of array1 and array2
[ True False False True False]
<jupyter_text>Q7. Write the NumPy program to find common values between two
arrays?
Expected OutputArray1: [ 0 10 20 40 60]
Array2: [10, 30, 40]
Common values between two arrays-
[10 40]<jupyter_code>import numpy as np
array1 = np.array([0, 10, 20, 40, 60])
print("Array1: ",array1)
array2 = np.array([10, 30, 40])
print("Array2: ",array2)
print("Common values between two arrays:")
print(np.intersect1d(array1, array2))<jupyter_output>Array1: [ 0 10 20 40 60]
Array2: [10 30 40]
Common values between two arrays:
[10 40]
<jupyter_text>Q8. Write the NumPy program to get the unique elements of an array?
Expected OutputOriginal array-
[10 10 20 20 30 30]
Unique elements of the above array-
[10 20 30]
Original array-
[[1 1]
[2 3]]
Unique elements of the above array-
[1 2 3]<jupyter_code>import numpy as np
x = np.array([10, 10, 20, 20, 30, 30])
print("Original array:")
print(x)
print("Unique elements of the above array:")
print(np.unique(x))
x = np.array([[1, 1], [2, 3]])
print("Original array:")
print(x)
print("Unique elements of the above array:")
print(np.unique(x))<jupyter_output>Original array:
[10 10 20 20 30 30]
Unique elements of the above array:
[10 20 30]
Original array:
[[1 1]
[2 3]]
Unique elements of the above array:
[1 2 3]
<jupyter_text>Q9. Write the NumPy program to find the set exclusive-or of two arrays.
Set exclusive-or will return the sorted, unique values that are in
only one (not both) of the input arrays?
Array1- [ 0 10 20 40 60 80]
Array2- [10, 30, 40, 50, 70]
Unique values that are in only one (not both) of the input arrays-
[ 0 20 30 50 60 70 80]<jupyter_code>import numpy as np
array1 = np.array([0, 10, 20, 40, 60, 80])
print("Array1: ",array1)
array2 = [10, 30, 40, 50, 70]
print("Array2: ",array2)
print("Unique values that are in only one (not both) of the input arrays:")
print(np.setxor1d(array1, array2)) <jupyter_output>Array1: [ 0 10 20 40 60 80]
Array2: [10, 30, 40, 50, 70]
Unique values that are in only one (not both) of the input arrays:
[ 0 20 30 50 60 70 80]
<jupyter_text>Q10. Write the NumPy program to test if all elements in an array
evaluate to True ?
Note: 0 evaluates to False in NumPy<jupyter_code>import numpy as np
print(np.all([[True,False],[True,True]]))
print(np.all([[True,True],[True,True]]))
print(np.all([10, 20, 0, -50]))
print(np.all([10, 20, -50])) <jupyter_output>False
True
False
True
<jupyter_text>Q11. Write the NumPy program to test whether any array element along
the given axis evaluates to True?
Note: 0 evaluates to False in NumPy<jupyter_code>import numpy as np
print(np.any([[False,False],[False,False]]))
print(np.any([[True,True],[True,True]]))
print(np.any([10, 20, 0, -50]))
print(np.any([10, 20, -50])) <jupyter_output>False
True
True
True
<jupyter_text>Q12. Write the NumPy program to construct an array by repeating?
Sample array- [1, 2, 3, 4]
Expected OutputOriginal array
[1, 2, 3, 4]
Repeating 2 times
[1 2 3 4 1 2 3 4]
Repeating 3 times
[1 2 3 4 1 2 3 4 1 2 3 4]<jupyter_code>import numpy as np
a = [1, 2, 3, 4]
print("Original array")
print(a)
print("Repeating 2 times")
x = np.tile(a, 2)
print(x)
print("Repeating 3 times")
x = np.tile(a, 3)
print(x)<jupyter_output>Original array
[1, 2, 3, 4]
Repeating 2 times
[1 2 3 4 1 2 3 4]
Repeating 3 times
[1 2 3 4 1 2 3 4 1 2 3 4]
<jupyter_text>Q13. Write the NumPy program to find the indices of the maximum and
minimum values with the given axis of an array?
Original array- [1 2 3 4 5 6]
Maximum Values- 5
Minimum Values- 0<jupyter_code>import numpy as np
x = np.array([1, 2, 3, 4, 5, 6])
print("Original array: ",x)
print("Maximum Values: ",np.argmax(x))
print("Minimum Values:",np.argmin(x))<jupyter_output>Original array: [1 2 3 4 5 6]
Maximum Values: 5
Minimum Values: 0
<jupyter_text>Q14. Write the NumPy program compare two arrays using numpy?
Array a- [1 2]
Array b- [4 5]
a > b
[False False]
a >= b
[False False]
a < b
[ True True]
a <= b
[ True True]
<jupyter_code>import numpy as np
a = np.array([1, 2])
b = np.array([4, 5])
print("Array a: ",a)
print("Array b: ",b)
print("a > b")
print(np.greater(a, b))
print("a >= b")
print(np.greater_equal(a, b))
print("a < b")
print(np.less(a, b))
print("a <= b")
print(np.less_equal(a, b)) <jupyter_output>Array a: [1 2]
Array b: [4 5]
a > b
[False False]
a >= b
[False False]
a < b
[ True True]
a <= b
[ True True]
<jupyter_text>Q15. Write the NumPy program to sort an along the first, last axis of an
array?
Sample array- [[2,5],[4,4]]
Expected OutputOriginal array:
[[4 6]
[2 1]]
Sort along the first axis:
[[2 1]
[4 6]]
Sort along the last axis-
[[1 2]
[4 6]]<jupyter_code>import numpy as np
a = np.array([[4, 6],[2, 1]])
print("Original array: ")
print(a)
print("Sort along the first axis: ")
x = np.sort(a, axis=0)
print(x)
print("Sort along the last axis: ")
y = np.sort(x, axis=1)
print(y) <jupyter_output>Original array:
[[4 6]
[2 1]]
Sort along the first axis:
[[2 1]
[4 6]]
Sort along the last axis:
[[1 2]
[4 6]]
<jupyter_text>Q16. Write the NumPy program to sort pairs of first name and last name
return their indices (first by last name, then by first name).
first_names - ( Betsey, Shelley, Lanell, Genesis, Margery )
last_names - ( Battle, Brien, Plotner, Stahl, Woolum )
Expected Output-
[1 3 2 4 0]<jupyter_code>import numpy as np
first_names = ('Margery', 'Betsey', 'Shelley', 'Lanell', 'Genesis')
last_names = ('Woolum', 'Battle', 'Plotner', 'Brien', 'Stahl')
x = np.lexsort((first_names, last_names))
print(x) <jupyter_output>[1 3 2 4 0]
<jupyter_text>Q17. Write the NumPy program to get the values and indices of the
elements that are bigger than 10 in the given array?
Original array-
[[ 0 10 20]
[20 30 40]]
Values bigger than 10 = [20 20 30 40]
Their indices are (array([0, 1, 1, 1]), array([2, 0, 1, 2]))<jupyter_code>import numpy as np
x = np.array([[0, 10, 20], [20, 30, 40]])
print("Original array: ")
print(x)
print("Values bigger than 10 =", x[x>10])
print("Their indices are ", np.nonzero(x > 10)) <jupyter_output>Original array:
[[ 0 10 20]
[20 30 40]]
Values bigger than 10 = [20 20 30 40]
Their indices are (array([0, 1, 1, 1], dtype=int64), array([2, 0, 1, 2], dtype=int64))
<jupyter_text>Q18. Write the NumPy program to find the memory size of a NumPy
array?
Expected Output128 bytes
<jupyter_code>import numpy as np
n = np.zeros((4,4))
print("%d bytes" % (n.size * n.itemsize)) <jupyter_output>128 bytes
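A small, hedged aside on Q18: NumPy also exposes the same quantity directly through the ndarray.nbytes attribute, so the sketch below (not part of the original solution) should print the same number:

```python
import numpy as np

n = np.zeros((4, 4))                       # 16 elements x 8 bytes (float64)
print("%d bytes" % (n.size * n.itemsize))  # manual calculation, as in Q18
print("%d bytes" % n.nbytes)               # same value via the nbytes attribute
```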
|
no_license
|
/Subjective Assignment - 5 - Numpy 1(Solution).ipynb
|
Chandan010298/Deep-Learning
| 18 |
<jupyter_start><jupyter_text>\hfill Department of Statistics
\hfill Jaeyeong Kim
# madelon dataset
## Load the dataset<jupyter_code>from sklearn import tree
import pandas as pd
import matplotlib.pyplot as plt
#This will be used to bold characters
class color:
BOLD = '\033[1m'
END = '\033[0m'
X_train = pd.read_csv('madelon\madelon_train.data', header = None,\
delimiter = ' ').dropna(axis='columns')
#drop 501th column with NA values
print(color.BOLD + 'Summary of the basic information about X_train' + color.END)
print(X_train.info())
Y_train = pd.read_csv('madelon\madelon_train.labels', header = None)
print(color.BOLD + '\n\nSummary of the basic information about Y_train' + color.END)
print(Y_train.info())
X_valid = pd.read_csv('madelon\madelon_valid.data', header = None, \
delimiter = ' ').dropna(axis='columns')
Y_valid = pd.read_csv('madelon\madelon_valid.labels', header = None)<jupyter_output>[1mSummary of the basic information about X_train[0m
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 2000 entries, 0 to 1999
Columns: 500 entries, 0 to 499
dtypes: int64(500)
memory usage: 7.6 MB
None
[1m
Summary of the basic information about Y_train[0m
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 2000 entries, 0 to 1999
Data columns (total 1 columns):
0 2000 non-null int64
dtypes: int64(1)
memory usage: 15.7 KB
None
<jupyter_text>
## Train the dataset<jupyter_code>x = []
for i in range(12):
#Train decision tree with max depth from 1 to 12
clf = tree.DecisionTreeClassifier(criterion = 'entropy', max_depth = i+1)
clf.fit(X_train, Y_train)
#Calculate decision accuracy score of training dataset and validation dataset
x1 = clf.score(X_train, Y_train)
x2 = clf.score(X_valid, Y_valid)
#Save the errors
x.append([i+1, 1 - x1, 1 - x2])
x = pd.DataFrame(data = x, columns = ['Depth', 'Train', 'Valid'])<jupyter_output><empty_output><jupyter_text>## Tree Depth vs Misclassification Errors<jupyter_code># Plot the results
plt.plot(x['Depth'], x['Train'], 'r--', x['Depth'], x['Valid'])
plt.xlabel('Depth')
plt.ylabel('Misclassification errors')
plt.legend(('Train', 'Valid'))
plt.show()
def highlight_min(s):
'''
highlight the minimum in a Series yellow.
'''
is_min = s == s.min()
return ['background-color: yellow' if v else '' for v in is_min]
# Make a table
print(color.BOLD + "Depth vs Misclassification errors" + color.END)
print(x)
print(color.BOLD + "\nThe minimum misclassification error" + color.END)
print(x.loc[x['Valid'].idxmin()])
from sklearn.tree import export_graphviz
import pydotplus
from IPython.display import Image
# Visualize how the decision tree works
dot_data = export_graphviz(clf, out_file=None, filled=True, rounded=True, \
special_characters=True)
graph = pydotplus.graph_from_dot_data(dot_data)
Image(graph.create_png())<jupyter_output><empty_output><jupyter_text>As the maximum depth increases, misclassification error decreases steadily in the training dataset. However, misclassification error of the validation dataset has a minimum value when the maximum depth is 5. This is because the model is overfitting when the maximum depth is greater than 5.
# wilt dataset
## Load the dataset<jupyter_code>X_train = pd.read_csv('wilt\wilt_train.csv', header = None)
print(color.BOLD + 'Summary of the basic information about X_train' + color.END)
print(X_train.info())
Y_train = pd.read_csv('wilt\wilt_train.labels', header = None)
print(color.BOLD + 'Summary of the basic information about Y_train' + color.END)
print(Y_train.info())
X_test = pd.read_csv('wilt\wilt_test.csv', header = None)
print(color.BOLD + 'Summary of the basic information about X_train' + color.END)
print(X_test.info())
Y_test = pd.read_csv('wilt\wilt_test.labels', header = None)<jupyter_output>[1mSummary of the basic information about X_train[0m
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 4339 entries, 0 to 4338
Data columns (total 5 columns):
0 4339 non-null float64
1 4339 non-null float64
2 4339 non-null float64
3 4339 non-null float64
4 4339 non-null float64
dtypes: float64(5)
memory usage: 169.6 KB
None
[1mSummary of the basic information about Y_train[0m
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 4339 entries, 0 to 4338
Data columns (total 1 columns):
0 4339 non-null int64
dtypes: int64(1)
memory usage: 34.0 KB
None
[1mSummary of the basic information about X_train[0m
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 500 entries, 0 to 499
Data columns (total 5 columns):
0 500 non-null float64
1 500 non-null float64
2 500 non-null float64
3 500 non-null float64
4 500 non-null float64
dtypes: float64(5)
memory usage: 19.6 KB
None
<jupyter_text>## Train the dataset<jupyter_code>x = []
for i in range(10):
#Train decision tree with max depth from 1 to 10
clf = tree.DecisionTreeClassifier(criterion = 'entropy', max_depth = i+1)
clf.fit(X_train, Y_train)
#Calculate decision accuracy score of training dataset and validation dataset
x1 = clf.score(X_train, Y_train)
x2 = clf.score(X_test, Y_test)
#Save the errors
x.append([i+1, 1 - x1, 1 - x2])
x = pd.DataFrame(data = x, columns = ['Depth', 'Train', 'Test'])<jupyter_output><empty_output><jupyter_text>## Tree Depth vs Misclassification Errors<jupyter_code># Plot the results
plt.plot(x['Depth'], x['Train'], 'r--', x['Depth'], x['Test'])
plt.xlabel('Depth')
plt.ylabel('Misclassification errors')
plt.legend(('Train', 'Test'))
plt.show()
# Make a table
print(color.BOLD + "Depth vs Misclassification errors" + color.END)
print(x)
print(color.BOLD + "\nThe minimum misclassification error" + color.END)
print(x.loc[x['Test'].idxmin()])<jupyter_output>[1mDepth vs Misclassification errors[0m
Depth Train Test
0 1 0.017055 0.374
1 2 0.017055 0.374
2 3 0.009219 0.212
3 4 0.007605 0.208
4 5 0.004840 0.162
5 6 0.002766 0.158
6 7 0.001613 0.166
7 8 0.000691 0.156
8 9 0.000230 0.152
9 10 0.000230 0.172
[1m
The minimum misclassification error[0m
Depth 9.00000
Train 0.00023
Test 0.15200
Name: 8, dtype: float64
<jupyter_text>Even when the maximum depths of the decision trees are not deep, misclassification errors are very small in the training dataset (at most 1.7%). However, misclassification errors in the test dataset are at least 10 times higher than those errors in the training dataset.<jupyter_code>print(color.BOLD + 'Train set information' + color.END)
print(Y_train.describe())
print(color.BOLD + '\n\nTest set information' + color.END)
print(Y_test.describe())<jupyter_output>[1mTrain set information[0m
0
count 4339.000000
mean 0.017055
std 0.129490
min 0.000000
25% 0.000000
50% 0.000000
75% 0.000000
max 1.000000
[1m
Test set information[0m
0
count 500.000000
mean 0.374000
std 0.484348
min 0.000000
25% 0.000000
50% 0.000000
75% 1.000000
max 1.000000
<jupyter_text>The train set and test set have different statistics. Only 1.7% of the train set equals 1 but 37.4% of the test set is 1. Therefore, the test set is biased and it cannot be used to test the algorithm.
# gisette dataset
## Load the dataset<jupyter_code>X_train = pd.read_csv('gisette\gisette_train.data', header = None, \
delimiter = ' ').dropna(axis='columns')
print(color.BOLD + 'Summary of the basic information about X_train' + color.END)
print(X_train.info())
Y_train = pd.read_csv('gisette\gisette_train.labels', header = None)
print(color.BOLD + 'Summary of the basic information about Y_train' + color.END)
print(Y_train.info())
X_valid = pd.read_csv('gisette\gisette_valid.data', header = None, \
delimiter = ' ').dropna(axis='columns')
Y_valid = pd.read_csv('gisette\gisette_valid.labels', header = None)<jupyter_output>[1mSummary of the basic information about X_train[0m
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 6000 entries, 0 to 5999
Columns: 5000 entries, 0 to 4999
dtypes: int64(5000)
memory usage: 228.9 MB
None
[1mSummary of the basic information about Y_train[0m
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 6000 entries, 0 to 5999
Data columns (total 1 columns):
0 6000 non-null int64
dtypes: int64(1)
memory usage: 47.0 KB
None
<jupyter_text>## Train the dataset<jupyter_code>x = []
for i in range(6):
#Train decision tree with max depth from 1 to 6
clf = tree.DecisionTreeClassifier(criterion = 'entropy', max_depth = i+1)
clf.fit(X_train, Y_train)
#Calculate decision accuracy score of training dataset and validation dataset
x1 = clf.score(X_train, Y_train)
x2 = clf.score(X_valid, Y_valid)
#Save the errors
x.append([i+1, 1 - x1, 1 - x2])
x = pd.DataFrame(data = x, columns = ['Depth', 'Train', 'Valid'])<jupyter_output><empty_output><jupyter_text>## Tree Depth vs Misclassification Errors<jupyter_code># Plot the results
plt.plot(x['Depth'], x['Train'], 'r--', x['Depth'], x['Valid'])
plt.xlabel('Depth')
plt.ylabel('Misclassification errors')
plt.legend(('Train', 'Valid'))
plt.show()
# Make a table
print(color.BOLD + "Depth vs Misclassification errors" + color.END)
print(x)
print(color.BOLD + "\nThe minimum misclassification error" + color.END)
print(x.loc[x['Valid'].idxmin()])<jupyter_output>[1mDepth vs Misclassification errors[0m
Depth Train Valid
0 1 0.162000 0.169
1 2 0.122167 0.139
2 3 0.082500 0.095
3 4 0.068500 0.085
4 5 0.051500 0.087
5 6 0.039000 0.083
[1m
The minimum misclassification error[0m
Depth 6.000
Train 0.039
Valid 0.083
Name: 5, dtype: float64
|
no_license
|
/Assignment 1/Machine_Learning_Project1.ipynb
|
adr15c/STA5635-Machine-Learning-
| 10 |
<jupyter_start><jupyter_text># Discussion 01: Python Basics and Causality
Welcome to Discussion 01! This week, we will go over some Python Basics. You can find additional help on these topics in the course [textbook](https://eldridgejm.github.io/dive_into_data_science/front.html).
Additionally, [here](https://ucsd-ets.github.io/dsc10-2020-fa/published/default/reference/babypandas-reference.pdf) is a potentially useful reference sheet that contains several data wrangling tips.
I also highly recommend checking out [this](https://nationalzoo.si.edu/webcams/panda-cam) baby pandas resource as well.
Afterward, we will be talking about how to __establish causation__.<jupyter_code># please don't change this cell, but do make sure to run it
import babypandas as bpd
import matplotlib.pyplot as plt
import numpy as np
import math
import otter
grader = otter.Notebook()
from notebook.services.config import ConfigManager
cm = ConfigManager()
cm.update(
"livereveal", {
"width": "90%",
"height": "90%",
"scroll": True,
})<jupyter_output><empty_output><jupyter_text>## Jupyter Notebook Shortcuts
shift+enter: run cell and move focus to cell below
ctl+enter: run cell and keep focus on cell
Command Mode (cell is blue):
x: cut the cell, also quick way to delete
c: copy the cell
v: paste the cell
d+d: delete cell
a: make new cell above
b: make new cell below
y: change cell to code
m: change cell to markdown
enter: start editing cell
Editing Mode (cell is green):
esc: enter command mode
shift+tab: info about a function
# What we'll cover:
---
- What is Python?
- Primitive Types
# What is Python?
---
Python is a **high-level**, **interpreted** programming language invented by Guido van Rossum in 1991. It is a powerful language while remaining **dynamically-typed**, easily **readable**, and built around meaningful **whitespace**.
- Interpreted:
    - A file or cell can run instantly; it does not need to be compiled to another file
- Dynamically Typed:
    - Python infers what type you want a variable to be; you don't tell it explicitly (see the short example after this list)
- Readable:
    - Simply reading code aloud should largely reveal what's going on
- Whitespace:
    - You can *and should* use multiple lines to fit the `Python a e s t h e t i c`
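To make "interpreted" and "dynamically typed" concrete, here is a small illustrative cell (an added sketch, not part of the original discussion); the variable name `x` is just an example.<jupyter_code># Each line runs immediately, and the same name can be
# rebound to values of different types without declaring a type.
x = 3          # x refers to an integer
print(type(x))
x = "three"    # now x refers to a string
print(type(x))
x = 3.0        # and now to a float
print(type(x))<jupyter_output><empty_output>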
<jupyter_text># Data Types in Python
---
Everything in Python has a type.
Some things are really simple; you could call them *"primitive"*.
These things have a specific value.
There are four types of primitives:
- Integers (ex. 1, 2, -12)
- Floats (ex. 1.0, 3.5, -0.34)
- Strings ("this is a string", "a", "b")
- Booleans (True, False)
Other things are a bit more complex.
These things act more like containers for values (or even for other containers).
Some examples include:
- Lists
- Arrays
- Tables
- Dictionaries
- Sets
We will cover these next week.
### Primitive Types: integers, floats, strings, booleans<jupyter_code># Integers
type(65)
# Floats
type(1.0)
# Strings
type("Hello")
# Booleans (True or False)
type(False) <jupyter_output><empty_output><jupyter_text>What are some things we can do with these primitive types?<jupyter_code># Let's do some testing together... here's a couple to start with:
3 + 5 # Can we do this?
3 + 5.9876 # What about this?
# How about this?
# 3 + "string"
# or this?
"string" + "another string"
# Feel free to play around with different types and see what else is possible!<jupyter_output><empty_output>
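<jupyter_text>A few more things you might try with primitives (an added illustration, not part of the original discussion); the exact expressions are just examples.<jupyter_code># Strings support concatenation and repetition
print("ha" * 3)            # 'hahaha'
# Booleans come from comparisons and combine with and/or/not
print(3 < 5 and not False) # True
# You can convert between primitive types explicitly
print(int("42") + 1)       # 43
print(float(7))            # 7.0
# Mixing int and float gives a float; mixing str and int raises a TypeError
print(type(3 + 5.0))       # <class 'float'><jupyter_output><empty_output>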
| no_license | /Discussions/Discussion1/discussion.ipynb | ucsd-ets/dsc10-wi21 | 3 |
<jupyter_start><jupyter_text>https://matplotlib.org/3.1.1/tutorials/introductory/pyplot.html<jupyter_code>import pandas as pd
import matplotlib.pyplot as plt
import math
def truncate(n, decimals=0):
    # Truncate n toward zero to the given number of decimal places
    multiplier = 10 ** decimals
    return int(n * multiplier) / multiplier
df1=pd.read_csv("profile-200-exp8.xv", skiprows=(16), delimiter="\t",names=["x","y"])
df2=pd.read_csv("profile-200-exp9.xv", skiprows=(16), delimiter="\t",names=["x","y"])<jupyter_output><empty_output><jupyter_text>Verifica as primeiras linhas, se importou direito<jupyter_code>print (df1.head(2))
print (df2.head(2))<jupyter_output> x y
0 0.939937 0.000000
1 0.980857 7.530981
x y
0 -9.420371 0.000000
1 -9.378688 1.390916
<jupyter_text>Plot the raw data<jupyter_code>fig1 = plt.figure()
ax1 = fig1.add_subplot(111)
ax1.plot(df1["x"],df1["y"])
ax1.plot(df2["x"],df2["y"])
plt.show()<jupyter_output><empty_output><jupyter_text>Remove the points at the ends:<jupyter_code>#df1["x"]=df.loc[9:390,"x"]<jupyter_output><empty_output><jupyter_text>Adjustments in X:<jupyter_code>df1["x"]=df1["x"] + 15.1
df2["x"]=df2["x"] + 34<jupyter_output><empty_output><jupyter_text>Coloca a fase gás em ZERO:<jupyter_code>df["y"]=df["y"] +0<jupyter_output><empty_output><jupyter_text>Imprime o máximo e o mínimo da curva<jupyter_code>#perfil200.min()
y_min=truncate(df["y"].min(),2)
y_max=truncate(df["y"].max(),2)
delta=truncate((y_max - y_min),2)
print('min:',y_min, 'max:',y_max,'delta:',delta)<jupyter_output>min: -25.73 max: 24.64 delta: 50.37
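<jupyter_text>It can also help to know where along x the extremes occur (an added illustration, not in the original notebook); this assumes df2 is the curve of interest, as in the final plot below.<jupyter_code># x positions of the minimum and maximum of the curve
x_at_min = df2.loc[df2["y"].idxmin(), "x"]
x_at_max = df2.loc[df2["y"].idxmax(), "x"]
print('min at x =', truncate(x_at_min, 2), '| max at x =', truncate(x_at_max, 2))<jupyter_output><empty_output>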
<jupyter_text>Generate the plot<jupyter_code>fig1 = plt.figure()
ax1 = fig1.add_subplot(111)
ax1.plot(df1["x"],df1["y"],'k')
ax1.plot(df2["x"],df2["y"],'r')
plt.show()<jupyter_output><empty_output><jupyter_text>Here, plotting with pandas:<jupyter_code>p = df2.plot.line(x="x",y="y", color="black", grid=True, legend=False,figsize=(12,6))
p.set_title("PMF exp N-bulk", size=40)
p.set_xlabel("Distance / nm", size=20)
p.set_ylabel("Energy / KJ.mol-1", size=20)
p.text(3,-2,'min:'+str(y_min), size=20)
p.text(3,-2.5,'max:'+str(y_max),size=20)
p.text(3,-3,'delta:'+str(delta),size=20)
p.tick_params(labelsize=20)
plt.show()<jupyter_output><empty_output><jupyter_text>Save the plot to a file<jupyter_code>p.get_figure().savefig("pmfEXP-N-bulk.20200904.png")<jupyter_output><empty_output>
| permissive | /umbrella-sampling/jupyter-notebook/.ipynb_checkpoints/PMF-checkpoint.ipynb | wirttipereira/utils-md | 10 |